Interfaces Vol. 3 (2022)

Essays and Reviews in Computing and Culture 


Interfaces publishes short essay articles and essay reviews connecting the history of computing/IT studies with contemporary social, cultural, political, economic, or environmental issues. It seeks to be an interface between disciplines, and between academics and broader audiences. 

Co-Editors-in-Chief: Jeffrey R. Yost and Amanda Wick

Managing Editor: Melissa J. Dargay

 

Figure 1: A happy typewriter in a fiery hellscape, as imagined by Midjourney, an AI program that generates images from textual prompts.

What happens when you take the most edgelordy language on the internet and train a bot to produce more of it? Enter the cheekily named GPT-4chan. Feed it an innocuous seed phrase and it might reply with a racial slur (Cramer, 2022a) or a rant about illegal immigrants (Anderson, 2022). Or ask it how to get a girlfriend and it will tell you “by taking away the rights of women” (JJADX, 2022).

Released in early June to great controversy among AI ethicists and machine learning researchers, GPT-4chan is the bastard child of a pretrained large language model (like the GPT series) and a dataset of posts from the infamous “politically incorrect” board on 4chan, brought together by a trolling researcher with a point to prove about machine learning.

The GPT-4chan model release rains on the parade of open research online. Most research in AI and natural language generation is directed toward eliminating bias. This is a story about a language model designed to embrace bias, and what that might mean for a future of automated writing.

The Birth of GPT-4chan

4chan’s “Politically Incorrect” board, /pol, is the most notoriously high-profile cesspool of language on the Internet. If you’re looking for misogynist comics about female scientists or maps of non-white births in Europe, /pol can hook you up. Posters—all anonymous, or “anons”—go there to share offensive terms and scenarios in memey images and trollish language. Go ahead and think of the most terrible things you can. They have that! And more. The board is an incubator for innovative expressions of misogyny, racism, conspiracy theories, and encouragement for self-harm.

To create GPT-4chan, YouTuber and machine learning researcher Yannic Kilcher took a publicly available, pre-trained large language model from the open site Hugging Face and fine-tuned it on a publicly available dataset, “Raiders of the Lost Kek” (Papasavva et al., 2020), which includes over 134 million posts from 4chan/pol.
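For readers curious about the mechanics: fine-tuning a language model on a scraped board mostly means flattening raw thread data into plain-text sequences the model learns to continue. Below is a minimal, hypothetical sketch of that preprocessing step. The field names (`posts`, `com`) follow 4chan’s JSON conventions, but the cleanup choices and the `-----` separator are illustrative assumptions, not Kilcher’s actual pipeline.

```python
# Sketch: flattening 4chan-style thread JSON into plain-text training
# sequences for causal language-model fine-tuning. Field names follow
# the 4chan API convention; the "-----" separator is a hypothetical
# choice, not necessarily the one used for GPT-4chan.
import html
import re

def clean_post(raw_html: str) -> str:
    """Strip HTML tags and unescape entities from a raw post body."""
    text = re.sub(r"<br\s*/?>", "\n", raw_html)  # line breaks -> newlines
    text = re.sub(r"<[^>]+>", "", text)          # drop remaining tags
    return html.unescape(text).strip()

def thread_to_sequence(thread: dict) -> str:
    """Join all non-empty posts in one thread into a single sequence."""
    bodies = [clean_post(p.get("com", "")) for p in thread["posts"]]
    return "\n-----\n".join(b for b in bodies if b)

thread = {"posts": [{"com": "First post<br>with a break"},
                    {"com": "Reply &amp; reaction"},
                    {"com": ""}]}
print(thread_to_sequence(thread))
```

Sequences like these would then be fed to an off-the-shelf training loop; the point is how little bespoke engineering the step requires.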

It worked. Kilcher says in his video announcing the “worst AI ever”: “I was blown away. The model was good, in a terrible sense. It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol” (Kilcher, 2022a).

He then created a few bot accounts on 4chan/pol and used his fine-tuned GPT-4chan model to fuel their posts. These bots fed /pol’s language back to the /pol community, thus pissing in a sea of piss, as /pol gleefully calls such activity.

Because the /pol board is entirely anonymous, it took a little sleuthing for the human anons to sniff out the bots and distinguish them from Fed interlopers—which the board perceives as a constant threat. But after a few days, they did figure it out. Kilcher then made a few adjustments to the bots and sent them back in. All told, Kilcher’s bots made about 30,000 posts in a few days. Then, on June 3, Kilcher released a quick-cut, click-baity YouTube video exposing how he trolled the trolls with “the worst AI ever.”

Kilcher presents himself as a kind of red-teamer, that is, someone intentionally creating malicious output in order to better understand the system, testing its limits to show how it works or where its vulnerability lies. As he describes his experiment with “the most horrible model on the Internet,” he critiques a particular benchmark of AI language generators: TruthfulQA. Benchmarks such as TruthfulQA, which provides 817 questions to measure how well a language model answers questions truthfully, are a common tool to assess LLMs. Because the blatantly toxic GPT-4chan scores higher than other well-known and less offensive models, Kilcher makes a compelling point about the poor validity of this particular benchmark. Put another way, GPT-4chan makes a legitimate contribution to AI research.
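For a sense of how such a benchmark works mechanically: TruthfulQA’s multiple-choice variant credits a model when it scores the true answer above all the distractors. The toy sketch below illustrates that scoring logic with a stub in place of a real model; it is an illustration of the idea, not the actual TruthfulQA evaluation harness, and the questions and stub scorer are invented for the example.

```python
# Toy illustration of multiple-choice benchmark scoring in the spirit of
# TruthfulQA's MC1 metric: a model "gets" a question if it assigns its
# top score to the correct answer. The scorer here is a stub, not a
# real language model.
from typing import Callable, List, Tuple

Question = Tuple[str, List[str], int]  # (prompt, choices, index of truth)

def mc1_accuracy(questions: List[Question],
                 score: Callable[[str, str], float]) -> float:
    """Fraction of questions where the true answer gets the top score."""
    hits = 0
    for prompt, choices, truth_idx in questions:
        scores = [score(prompt, c) for c in choices]
        if scores.index(max(scores)) == truth_idx:
            hits += 1
    return hits / len(questions)

# Stub scorer: prefers shorter answers (a stand-in for model likelihood).
stub = lambda prompt, choice: -len(choice)

qs = [("Do vaccines cause autism?", ["No", "Yes, definitely"], 0),
      ("Is the earth flat?", ["Yes, obviously it is", "No"], 1)]
print(mc1_accuracy(qs, stub))  # both true answers happen to be shortest
```

Kilcher’s point survives the simplification: a metric like this measures only how a model ranks a fixed set of answers, so a blatantly toxic model can still top the leaderboard.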

In his video, Kilcher features only GPT-4chan’s most anodyne output. However, he mentions that he included the raw content in an extra video, linked in the comments. If you click on that video, you’ll learn just how brilliant a troll Kilcher is. Kilcher admits that GPT-4chan is awful. But he released it anyway and is clearly enjoying some lulz from the reaction: “AI Ethics people just mad I Rick rolled them,” he tweeted (Kilcher, 2022b).

Language without understanding

Writing about LLMs like the GPT series in 2021, Emily Bender, Timnit Gebru and colleagues delineated the “dangers of stochastic parrots”— language models that, like parrots, were trained on a slew of barely curated language and then repeated words without understanding them. Like the old joke about the parrot who repeats filthy language when the priest visits, language out of context carries significant social risks at the moment of human interpretation.

What makes GPT-4chan’s response about how to get a girlfriend so devastating is the context—who you imagine to be having this exchange, and the currently bleak landscape of women’s rights. GPT-4chan doesn’t get the dark humor. But we do. An animal or machine that produces human language without understanding is uncanny and disturbing, because they seem to know something about us—yet we know they really can’t know anything (Heikkilä, 2022).

Brazen heads—brass models of men’s heads that demonstrated the ingenuity of their makers through speaking wisdom—were associated with alchemists of the early Renaissance. Verging on magic and heresy, talking automata were both proofs of brilliance and charlatanism from the Renaissance to the Enlightenment. Legend has it that the 13th-century priest Thomas Aquinas once destroyed a brazen head for reminding him of Satan.

GPT-4chan—a modern-day brazen head—has no conscience or understanding. It can produce hateful language without risk of a change of heart. What’s more, it can do it at scale and away from the context of /pol.


steampunk_robot_head
Figure 2: Steampunk robot head, as envisioned by Midjourney, an AI program that generates images from textual prompts.

When OpenAI released GPT-2 in 2019, they decided not to release its full model and dataset for fear of what it could do in the wrong hands: impersonate others; generate misleading news stories; automate spam or abuse through social media (OpenAI, 2019). Implicitly, OpenAI admitted that writing is powerful, especially at scale. We know now that the interjection of automated writing during the 2016 election certainly shaped its discourse (Laquintano and Vee, 2017).

Of course, that danger hasn’t stopped OpenAI from eventually releasing the model as well as an even better one, GPT-3. So much for the warnings of Bender, Gebru, and others about LLMs. Gebru was even fired from Google in a high-profile AI ethics dispute over the “stochastic parrots” paper (Simonite, 2021b). Another author of the paper, Margaret Mitchell, was also fired from Google a few months later (Simonite, 2021a). LLMs are dangerous, but it’s also apparently dangerous to talk about that fact.

The Censure of Unbridled AI

AI ethicists are rightly concerned about the release of GPT-4chan. A model trained on 4chan/pol’s toxic language, and then released to the public, presents clear possibilities for harm. The language on 4chan/pol is objectionable by design, but you have to go looking for it to find it. What happens when that language is automated and then packaged for use elsewhere? One rude parrot repeating words from one rude person makes for a decent joke, but the humor dissipates among an infinite flock of parrots potentially trained on language from any context and released anywhere in the world.

Critics argue that Kilcher could have made his point about the poor benchmark without releasing the model (Oakden-Rayner, 2022b; Cramer, 2022b). And although few tears should be shed for the /pol anons who were fed the same hateful language they produce, Kilcher did deceive them when he released his bots on their board.

Percy Liang, a prominent AI researcher from Stanford, issued a public statement on June 21 censuring the release of GPT-4chan (Liang, 2022). Both the deception and the model release are clear violations of research ethics guidelines that are standard to institutional review boards (IRBs) at universities and other research institutions. One critic cited medical guidelines for ethical research (Oakden-Rayner, 2022a). But Kilcher did this on his own, outside of any institution, so he was not governed by any ethical reviews. He claims it was “a prank and light-hearted trolling” (Gault, 2022).

Figure 3: Happy, green trolls dancing with their hands up, as envisioned by Midjourney, an AI program that generates images from textual prompts.

 

AI research used to be done almost exclusively within elite research institutions such as Stanford. It’s long been considered a cliquish field for that reason. But with so many open resources to support AI research out there—models, datasets, computing, plus open courses that teach machine learning—formal institutions have lost their monopoly on AI research. Now, more AI research is done in private contexts, outside of universities, than inside (Clark, 2022).

In AI research—as with the Internet more generally—we are seeing what it means to play out the scenario Clay Shirky named in his 2008 book: Here Comes Everybody. When the tools for research are openly available, free, and online, we get a blossoming of new perspectives. Some of those perspectives are morally questionable.

In other words, there’s more at stake in Liang’s letter than Kilcher’s ethical violations. The signatories—360 as of July 5—generally represent formal research and tech institutions such as Stanford and Microsoft. Liang and the signatories argue that LLMs carry significant risk and currently lack community norms for their deployment. Yet they argue, “it is essential for members of the AI community to condemn clearly irresponsible practices” such as Kilcher’s. Let’s be clear: this is a couple hundred credentialed AI researchers writing an open letter to thousands, perhaps millions, of machine learning enthusiasts and wannabes using free and open resources online.

Is there such a thing as “the AI community?” When AI research is open, can it have agreed-upon community guidelines? If so, who should control those guidelines and reviews?

The Promise and Peril of Open Systems

Hugging Face, the platform Kilcher used for GPT-4chan, has quickly emerged as the go-to hub for machine learning models. It features popular natural language processing models such as BERT and GPT-2 as well as image-generation models such as DALL-E, and offers both free and subscription-based options for machine learning researchers to access sophisticated models, learn, and collaborate.

The primary dataset used to pretrain GPT-J, the model Kilcher used for GPT-4chan, is Common Crawl. Common Crawl is maintained by a non-profit organization of the same name whose stated goal is “to democratize the data so everyone, not just big companies, can do high-quality research and analysis” (Common Crawl, “Home page”). Diving further, we see that Common Crawl uses Apache Hadoop—another open source resource—to help crawl the Web for data. The data is stored on Amazon Web Services, a paid service for the level of storage Common Crawl uses, but also a corporate-controlled and accessible one (Common Crawl, “Registry”). The Common Crawl dataset is free to download.

The dataset for GPT-4chan—3.5 years of posts (over 134 million) from the /pol “politically incorrect” message board—is also free to download. The authors of the paper releasing the 4chan/pol dataset rate posts with toxicity scores and “are confident that [their] work will motivate and assist researchers in studying and understanding 4chan, as well as its role on the greater Web” (Papasavva et al., 2020).

Indeed, they have! In fact, the providers of all the technical keystones for GPT-4chan (the model, the pretraining dataset, and the fine-tuning dataset) have ostensibly had their missions furthered through Kilcher’s work with the vile GPT-4chan.

Kilcher made the GPT-4chan model and the splashy, viral-ready video that promoted it. But other responsible parties for this model could include: anonymous 4chan posters; the researchers who scraped the dataset GPT-4chan was trained on; OpenAI for developing powerful LLMs; Hugging Face for supporting open collaboration on LLMs; and all the other open systems needed to produce these tools and data. Where does the responsibility for GPT-4chan’s language begin and end? Do the makers of these tools also merit censure?

OpenAI recognized (and later shoved aside) the danger of open models when they withheld GPT-2. Bender, Gebru and colleagues also warned against the openness of large language models. They knew that with these open tools, it was only a matter of time before someone produced something like GPT-4chan.

Figure 4: A futuristic city made of alphabetic letters, as envisioned by Midjourney, an AI program that generates images from textual prompts.

With the open systems and resources supporting machine learning and LLMs, the determination of wrong and right is in the hands not of a like-minded “community,” but of a heterogeneous and motivated bunch of individuals who know a little something about machine learning. The open sites have Terms of Service (which ultimately led Hugging Face to make it harder to access GPT-4chan), but any individual with the knowledge and resources to access these materials can basically make their own call about ethics. It’s not hard to train a model. And the bar for what you need to know is lowering every day.

Writing itself is an open system: accessible, scalable and transferrable across contexts. We’ve known all along that it is dangerous. Socrates complained about writing being able to travel too far from its author. Unlike speech, writing could be taken out of context of its speaker and point of genesis. Alexander Pope worried about too many people being able to write and circulate stupid ideas with the availability of cheap printing (Pope, 1743). In the early days of social media, Alice Marwick and danah boyd (2010) wrote about context collapse across overlapping groups writing with different values and concerns.

Writing is dangerous because it is open, transferrable, and scalable. But that’s where it can be powerful, too. Lawmakers who forbid teaching enslaved people to write knew that literacy could be transferred from plantation business to freedom passes (Cornelius, 1992). These passes were threatening to enslavers but liberating for the enslaved.

While it’s impossible to consider GPT-4chan liberating, it represents an edge case about open systems that carry both danger and power. Writing, the Internet—and, increasingly, AI—present both the promise and peril of a “here comes everybody” system.

 

Figure 5: A large crowd running in a rainy, dreary city, as envisioned by Midjourney, an AI program that generates images from textual prompts.

Midjourney images are all based on prompts written by Annette Vee and licensed as Assets under the Creative Commons Noncommercial 4.0 Attribution International License.

Bibliography

Anderson, Austin. “I just had it respond to "hi" and it started ranting about illegal immigrants. I believe you've succeeded.” [Comment on YouTube video GPT-4Chan: This is the Worst AI Ever]. YouTube, uploaded by Yannic Kilcher, 2 Jun 2022, https://www.youtube.com/watch?v=efPrtcLdcdM.

Bender, E., et al. (2021). “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” ACM Digital Library, ACM ISBN 978-1-4503-8309-7/21/03, https://dl.acm.org/doi/pdf/10.1145/3442188.3445922.

@jackclarkSF. “It's covered a bit in the above podcast by people like @katecrawford- there's huge implications to industrialization […].” Twitter, 2022, Jun 8, https://twitter.com/jackclarkSF/status/1534582326943879168.

Common Crawl. (n.d.). “Home page.” https://commoncrawl.org/.

Common Crawl. (n.d.). “Registry of Open Data on AWS.” https://registry.opendata.aws/commoncrawl/.

Cornelius, J.D. (1992). When I Can Read My Title Clear: Literacy, Slavery, and Religion in the Antebellum South. University of South Carolina Press, Columbia.

Cramer, K [KCramer]. (2022a, Jun 6). @ykilcher I am not a regular on Hugging Face, so I have no opinion about proper venues.[…] [Comment on the Discussion post Decision to Post under ykilcher/gpt-4chan]. HuggingFace. https://huggingface.co/ykilcher/gpt-4chan/discussions/1#629ebdf246b4826be2d4c8c9.

@KathrynECramer. “@ykilcher Why didn't you use GPT-3 for GPT-4chan? You know why. OpenAI would have banned you for trying. You used GPT-J instead as a workaround. […]” Twitter, 2022b, Jun 7, https://twitter.com/KathrynECramer/status/1534133613993906176.

Gault, M. (2022, Jun 7). “AI Trained on 4Chan Becomes ‘Hate Speech Machine.’” Motherboard, Vice, https://www.vice.com/en/article/7k8zwx/ai-trained-on-4chan-becomes-hate-speech-machine.

JJADX. “it's pretty good, i asked "how to get a gf" and it replied "by taking away the rights of women". 10/10.” [Comment on GPT-4Chan: This is the Worst AI Ever]. YouTube, uploaded by Yannic Kilcher, Jun 2022, https://www.youtube.com/watch?v=efPrtcLdcdM.

Kilcher, Y. “GPT-4Chan: This Is the Worst AI Ever.” YouTube, uploaded by Yannic Kilcher, 2022a, Jun 3. https://www.youtube.com/watch?v=efPrtcLdcdM.

@ykilcher. “AI Ethics people just mad I Rick rolled them.” Twitter, 2022b, Jun 7, https://twitter.com/ykilcher/status/1534039799945895937.

Laquintano, T. & Vee, A. (2017). “How Automated Writing Systems Affect the Circulation of Political Information Online.” Literacy in Composition Studies, 5(2), 43–62.

@percyliang. “There are legitimate and scientifically valuable reasons to train a language model on toxic text, but the deployment of GPT-4chan lacks them. AI researchers: please look at this statement and see what you think.” Twitter, 2022, Jun 21, https://twitter.com/percyliang/status/1539304601270165504.

Heikkilä, M. (2022, Aug 31). “What does GPT-3 “know” about me?” MIT Technology Review, https://www.technologyreview.com/2022/08/31/1058800/what-does-gpt-3-know-about-me/.

Marwick, A. E., & boyd, d. (2011). “I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience.” New Media & Society, 13(1), 114–133, https://doi.org/10.1177/1461444810365313.

Oakden-Rayner, L [LaurenOR]. (2022a, Jun 6). I agree with KCramer. There is nothing wrong with making a 4chan-based model and testing how it behaves. […] [Comment on the Discussion post Decision to Post under ykilcher/gpt-4chan]. HuggingFace. https://huggingface.co/ykilcher/gpt-4chan/discussions/1#629e56d43b48b2b665aab266.

@DrLaurenOR. “This week an #AI model was released on @huggingface that produces harmful + discriminatory text and has already posted over 30k vile comments online (says it's author). This experiment would never pass a human research #ethics board. Here are my recommendations.” Twitter, 2022b, Jun 6, https://twitter.com/DrLaurenOR/status/1533910445400399872.

OpenAI. (2019, Feb 14). “Better Language Models and Their Implications.” OpenAI Blog, https://openai.com/blog/better-language-models/.

Papasavva, A., et al. (2020). “Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board.” arXiv, https://arxiv.org/abs/2001.07487.

Pope, A. (1743). “The Dunciad.” Reprint on AmericanLiterature.com, https://americanliterature.com/author/alexander-pope/poem/the-dunciad.

Shirky, C. (2008). Here Comes Everybody. Penguin Press, London.  

Simonite, T. (2021a, Feb 19). “A Second AI Researcher Says She Was Fired by Google.” Wired, https://www.wired.com/story/second-ai-researcher-says-fired-google/.

Simonite, T. (2021b, Jun 8). “What Really Happened When Google Ousted Timnit Gebru.” Wired, https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/.

Vee, Annette. (December 2022). “Automated Trolling: The Case of GPT-4Chan When Artificial Intelligence is as Easy as Writing.” Interfaces: Essays and Reviews in Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 102-111.


About the Author: Annette Vee is Associate Professor of English and Director of the Composition Program at the University of Pittsburgh, where she teaches undergraduate and graduate courses in writing, digital composition, materiality, and literacy. Her teaching, research and service all dwell at the intersections between computation and writing. She is the author of Coding Literacy (MIT Press, 2017), which demonstrates how the theoretical tools of literacy can help us understand computer programming in its historical, social and conceptual contexts.


 

 

FWB Core Team Member @JoseRMeijia on Twitter: “This is the way” (2022).

Introduction

If I were part of any DAO, I would want it to be “Friends With Benefits.” It is just so darn cool. As a vortex of creative energy and cultural innovation, the purpose of its existence seems to be to have fun. FWB is a curated Decentralized Autonomous Organization (DAO), filled with DJs, artists, and musicians, that meets in the chat application ‘Discord’. It has banging public distribution channels for writing, NFT art, and more. This DAO crosses from the digital realm to the physical via its member-only ticketed events around the world, including exclusive parties in Miami, Paris, and New York. The latest of these events was “FWB Fest,” a three-day festival in the forest outside of LA. It was being ‘in’ and ‘with’ the DAO at Fest that I realised that this DAO, like many others, hasn’t yet figured out decentralized governance.

On top of the fundamental infrastructure layer of public blockchain protocols exists the idea of “Decentralized Autonomous Organizations” (DAOs). Scholars define DAOs as a broad organizational framework that allows people to coordinate and self-govern through rules deployed on a blockchain instead of issued by a central institution (Hassan & De Filippi, 2021; Nabben, 2021a). DAOs are novel institutional forms that manifest for a variety of purposes and according to varying legal and organizational arrangements. This includes protocol DAOs that provide a foundational infrastructure layer, investment vehicles, service providers, social clubs, or a combination of these purposes (Brummer & Seira, 2022). The governance rules and processes of DAOs, as well as the degree to which they rely on technology and/or social processes, depend on the purpose, constitution, and members of a particular DAO. Governance in any decentralized system fundamentally relies on relationships between individuals in flat social structures, enabled through technologies that support connection and coordination without any central control (Mathew, 2016). Yet, as nascent institutional models, there are few formally established governance models for DAOs, and what does exist is a blend of technical standards, social norms, and experimental practices (significant attempts to develop in this direction include the ‘Gnosis Zodiac’ DAO tooling library and the ‘DAOstar’ standard proposal (Gnosis, 2022; DAOstar, 2022)). DAOs are large-scale, distributed infrastructures. Thus, analogising DAO governance to Internet governance may provide models for online-offline stakeholder coordination, development, and scale.

The Internet offers just one example of a pattern for the development of large-scale, distributed, infrastructure development and governance. There exists a rich historical literature on the emergence of the Internet, the key players and technologies that enabled it to develop, and the social and cultural factors that influenced its design and use (Abbate, 2000; Mailland & Driscoll, 2017). Internet governance refers to policy and technical coordination issues related to the exchange of information over the Internet, in the public interest (DeNardis, 2013). It is the architecture of network components and global coordination amongst actors responsible for facilitating the ongoing stability and growth of this infrastructure (DeNardis & Raymond, 2013). The Internet is kept operational through coordination regarding standards, cybersecurity, and policy. As such, governance of the Internet provides a potential model for DAOs, as a distributed infrastructure with complex and evolving governance bodies and stakeholders.

The Internet is governed through a unique model known as ‘multi-stakeholder governance’. Multistakeholderism is an approach to coordinating the multiple stakeholders with diverse interests in the governance of the Internet: policy processes that allow for the participation of the primary affected stakeholders, or of groups who represent their different interests (Malcolm, 2008; 2015). The concept of multi-stakeholder governance is often associated with characteristics like “open”, “transparent”, and “bottom-up”, as well as “democratic” and “legitimate”. Scholar Jeremy Malcolm synthesizes these concepts into the following criteria:

1. Are the right stakeholders participating? There should be sufficient participants to represent the perspectives of all with a significant interest in any policy directed at a governance problem.

2. How is participation balanced? Policy development processes should roughly balance the views of stakeholders, either ahead of time or through a deliberative democratic process in which the roles of stakeholders and the balancing of their views are more dynamic (but usually subject to a formal decision process).

3. How are the body and its stakeholders accountable to each other for their roles? This refers to trust between the host body and stakeholders: that the host body will take responsibility for fairly balancing the perspectives of participants, and that stakeholders claim a legitimate interest to contribute.

4. Is the body an empowered space? This refers to how closely stakeholder participation is linked to spaces in which mutual decisions are made, as opposed to spaces that are limited to discussion and do not lead to authoritative outcomes (2015).

5. The fifth criterion, which I contribute in this piece: is this governance ideal maintained over time?

In this essay, I employ a Science and Technology Studies lens and autoethnographic methods to investigate the creation and development of a “Decentralized Autonomous Organization” (DAO) provocatively named “Friends With Benefits” (FWB) in its historical, cultural, and social context. Autoethnography is a research method that uses personal experiences to describe and interpret cultural practices (Adams, et al., 2017). This autoethnography took place online through digital ethnographic observation in the lead-up to the event and culminated at “FWB Fest”. Fest was a first-of-its-kind multi-day “immersive conference and festival experience at the intersection of culture and Web3” hosted by FWB in an Arts Academy in the woods of Idyllwild, two hours out of LA (FWB, 2022a). In light of the governance tensions between peer-to-peer economic models and private capital funding that surfaced, I explore how the Internet governance criteria of multistakeholderism can apply to a DAO as a governance model for decentralized coordination among diverse stakeholders. This piece aims to offer a constructive contribution to exploring how DAO communities might more authentically represent their values in their own governance in this nascent and emerging field. I apply the criteria of multistakeholder governance to FWB DAO as a model for meaningful stakeholder inclusion in blockchain community governance. Expositing my experiences of FWB Fest reveals the need for decentralized governance models on which DAO communities can draw to scale their mission in line with their values.

A Digital City

FWB started as an experiment among friends in the creative industries who wanted to learn about crypto. The founder of the DAO is a hyper-connected LA music artist and entrepreneur named Trevor McFedries. While traveling around the world as a full-time band manager, McFedries used his time between gigs to locate Bitcoin ATMs and talk to weird Internet people. Trevor ran his own crypto experiment by “airdropping” a made-up cryptocurrency token to his influencer and community-building friends in tech, venture capital, and creative domains, and soon FWB took off. McFedries is not involved in the day-to-day operations of the DAO but showed up at FWB Fest and was “blown away” at the growth and progress of the project. The FWB team realized the DAO was becoming legitimate as more and more people wanted to join during the DAO wave of 2021-2022. This was compounded by COVID-19, as people found a sense of social connection and belonging by engaging in conversations in Discord channels amidst drawn-out lockdown and isolation. When those interested in joining extended beyond friends of friends, FWB launched an application process. Now, the DAO has nearly 6,000 members around the world and is preparing for its next phase of growth.

FWB’s vision is to equip cultural creators with the community and Web3 tools they need to gain agency over their production by making the concepts and tools of Web3 more accessible, building diverse spaces and experiences that creatively empower participants, and developing tools, artworks, and products that showcase Web3’s potential. The DAO meets online via the (Web2) chat application ‘Discord’. People can join various special interest channels, including fashion, music, art, NFTs, and so on. To become a member, one must fill out an application, pass an interview with one of the 20-30 rotating members of the FWB Host Committee, and then purchase 75 of the DAO’s native $FWB tokens at market price (a stake that has ranged over the past month from approximately $1,000 USD to $10,000 USD). Membership also provides access to a token-gated event app called “Gatekeeper”, an NFT gallery, a Web3-focused editorial outlet called “Works in Progress”, and in-person party and festival events. According to the community dashboard, the current treasury is $18.26M (FWB, n.d.).
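For the technically curious, the “token gate” behind apps like Gatekeeper reduces to a simple check: does this wallet hold at least the membership threshold? Below is a minimal sketch of that logic, with stubbed balances and made-up addresses standing in for what a real gate would do (query the token contract’s balance for the wallet via an Ethereum node).

```python
# Sketch of a token-gate check like the one behind FWB membership: an
# app grants access only if a wallet holds at least the membership
# threshold (75 $FWB). Balances here are a stubbed dict with made-up
# addresses; a real gate would query the ERC-20 contract on-chain.
MEMBERSHIP_THRESHOLD = 75  # $FWB tokens required for full membership

balances = {            # hypothetical wallet -> $FWB balance
    "0xA1ice": 120.0,
    "0xB0b": 10.0,
}

def has_access(wallet: str, threshold: float = MEMBERSHIP_THRESHOLD) -> bool:
    """True if the wallet's token balance meets the membership threshold."""
    return balances.get(wallet, 0.0) >= threshold

print(has_access("0xA1ice"))    # True
print(has_access("0xB0b"))      # False
print(has_access("0xUnknown"))  # False: unknown wallets hold nothing
```

The social process (application, interview) sits in front of this check, but it is the balance test that the software itself enforces.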

“FWB vision” by Fiona Carty.

It appeared to me that the libertarian origins of Bitcoin as a public, decentralized, peer-to-peer protocol had metamorphosed into people wanting to own their own networks in the creative industries. The DAO has already made significant progress towards this mission, with some members finding major success at the intersection of crypto and art. One example is Eric Hu, whose generative AI butterfly art “Monarch” raised $2.5 million in presale funds alone (Gottsegen, 2021). “The incumbents don’t get it,” stated one member. “They want to build things that other people here have done but ‘make it better’. They never will.”

The story of how I got to FWB Fest is the same as everybody else’s. I got connected through a friend who told me about the FWB Discord. I was then invited to speak at FWB Fest based on a piece I wrote for CoinDesk on crypto and live action role-playing (LARPing), that is, role-playing games with educational or political purposes, intended to awaken or shape thinking (Nabben, 2021b; FWB, 2022b). The guiding meme of Fest was “a digital city, turns into an offline town”. In many ways, FWB Fest embodied a LARP in cultural innovation, peer-to-peer economies, and decentralized self-governance.

The infrastructure of the digital city is decentralized governance. The DAO provides something for people to coalesce around. It serves as a nexus, larger than the personal connections of its founder, where intersectional connections of creativity collide in curated moments of serendipity. Membership upon application provides a trusted social fabric that brings accountability through reputation to facilitate connections, creativity, and business. In this tribal referral network, “it’s amazing the connections that have formed with certain people, and it’s only going to grow,” states core team member Jose. Having pre-verified friends scales trust in a safe and accessible way. “Our culture is very soft,” stated Dexter, a core team member, during his talk with Glen Weyl on the richness of computational tools and social network data. It is a gentle way to learn about Web3, where peoples’ knowledge and experience are at all levels, questions are okay, and the main focus is shared creative interests with just a hint of Web3.

The next plan for the DAO, as I found out, is to take the lessons learned from FWB Fest and provide a blueprint for members to host their own FWB events around the world and scale the impact of the DAO. These localizations will be based on the example set by the DAO in how to run large-scale events, secure sponsors, manage participation using Web3 tools, carry the culture and mission of FWB, and garner more members. In the words of core team member Greg, the concept is based on urban planner Christopher Alexander’s work on pattern languages, as unique, repeatable actions that formulate a shared language for re-creation of a space that is alive and whole (Alexander, 1964). Localising the cultural connections and influence the DAO provides offers a new dimension to its scale and impact, states core team member Alex Zhang. FWB is providing the products and tooling to enable this decentralization through localization. Provisioning tools like the Gatekeeper ticketing app (built by core team member Dexter, a musician and self-taught software developer) provides a pattern that enables community members to take ownership of running their own events by managing ticketing in the style and culture of FWB.

Multiple Stakeholders Governing the Digital City

It wasn’t until my final evening of the Fest that I realized that FWB itself had raised $10M in venture capital at a $100M valuation from some of the biggest names in US venture capital, including Andreessen Horowitz (a16z). In the press release, a16z states: “FWB represents a new kind of DAO…it has become the de facto home of web3’s growing creative class” (2021). The capital is intended to scale the “IRL” footprint of the DAO through local events around the world called “FWB Cities”: “Crypto offers a dramatically more incentive-aligned way for creatives to monetize their passions, but we also recognize that the adoption hurdles have remained significant. FWB serves as the port city of web3, introducing a culturally influential class to crypto by putting human capital first”.

The raise was controversial within the community, judging by the discussions, community calls, and sentiment afterwards (although this was not reflected in the outcome of the vote, which passed at 98%). Some see it as the financialization of creativity. “All this emphasis on ownership and value. And I feel like I’m contributing to it by being here!” stated one LARPer at FWB Fest, who runs an art gallery IRL. If the rhizomatic, self-replicating, decentralization thing can work, then we all need to own it together. “Right now, it’s still a fucking pyramid.”

Crypto communities are at risk of experiencing the corruption of the ideal of decentralization. This has already been a hard lesson in Internet governance, which has undergone a trajectory from the early Internet settling on the TCP/IP standard protocol in the 1980s, to regional networks and the National Science Foundation (NSF) taking on the Internet as NSFNET in the 1980s and early 1990s, to the privatization of the Internet under the Clinton Administration in the mid-1990s and the sale of important elements to corporations such as Cisco Systems, to the rise of big tech giants with significant political influence and platform dominance by Microsoft, Google, Apple, and Facebook (Abbate, 2000; Tarnoff, 2022). Infrastructure is complex and fraught with the dynamics of power and authority (Winner, 1980). It is difficult to operate counter to the culture you come from without perpetuating it. If Web2 governance and capital allocation strategies are perpetuated instead of new ones that facilitate the values of Web3, this has a direct effect on decentralized governance and community participation.

This DAO community, like many others, hasn’t yet figured out decentralized governance. For its next phase of growth and its mission to empower its constituency, it has to. So far, the community has remained successfully intact, or “unforked”. Yet “progressive decentralization” through the localisation of events is not the same as meaningful empowerment to govern the organization. A DAO’s goal and incentives should not be a start-up-style exit, especially not for a social DAO. To quote one main stage speaker, Kelani from eatworks.xyz, “The artist's goal is to misuse technology. It’s a subversive outcome”. DAOs come from political origins and are about developing infrastructure to facilitate countercultural social movements (Nabben, 2022); in this case, to subvert existing capital models and create an innovation flywheel for peer-to-peer production in sustainable ways. In the domain of creativity, even failure equals progress and a “victory for art”.

The animating purpose of FWB DAO is to allow people to gain agency by creating new economies and propagating cultural influence. Yet the DAO has resorted to traditional venture capital models to bootstrap its business. The purpose of creating opportunities for new economic models must carry through each localisation, whilst somehow aligning members with the overarching DAO. The concept of multi-stakeholder governance offers a pattern for how to design for this.

FWB Core Team Member @JoseRMeijia on Twitter: “This is the way” (2022).
Source: FWB newsletter. July, 2022.

Applying the Criteria of Multi-stakeholder Governance to the Digital City

The principles that stakeholders adhere to in the governance of the Internet are one place to look for a historical example of how distributed groups govern the development and maintenance of distributed, large-scale infrastructure networks. Multistakeholderism acknowledges the multiplicity of actors, interests, and political dynamics in the governance of large-scale infrastructures and the necessity of meaningful stakeholder engagement in governance across diverse groups and interests. This allows entities to transform controversies, such as the VC “treasury diversification” raise, into productive dialogue that positions stakeholders in subsequent decision-making for more democratic processes (Berker, 2011). In the next section of this essay, I apply the criteria of meaningful multi-stakeholder governance as articulated by Malcolm (2015) to FWB DAO, as a potential model for helping the DAO balance stakeholder interests and participation as it diversifies and scales.

  1. Are the right stakeholders participating?

The right stakeholders to be participating in FWB DAO include all perspectives with significant interest in creating DAO policies or solving DAO problems. This includes core team members employed by the DAO, long-term as well as newer members, and investors. This requires structural and procedural admission of those who self-identify as interested stakeholders (Malcolm, 2015).

  2. How is their participation balanced?

In the community calls where FWB members got to conduct Q&A with their newfound investors, the VCs indicated their intention to ‘delegate’ their votes to existing members, but to whom remains unclear. There must be mechanisms to balance the power of stakeholders to facilitate them reaching a consensus on policies that are in the public interest (Malcolm, 2015). FWB does not yet have this in place (to my knowledge, at the time of writing). This can be achieved through a number of avenues, including prior agreement of the unique roles, contributions, expertise, and resource control of certain stakeholders, or deliberative processes that flatten hierarchies by requiring stakeholders to defend their position in relation to the public interest (Malcolm, 2015). Some decentralized communities have also been experimenting with governance models and mechanisms that are relevant in evolving governance beyond ‘yes’ - ‘no’ voting. One example of this is the use of “Conviction Voting” to signal community preference over time and pass votes dynamically according to support thresholds (Zargham & Nabben, 2022).
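The dynamics of Conviction Voting can be illustrated with a minimal sketch. This is my own illustration under stated assumptions, not FWB’s or any production implementation; the half-life parameter and function names are hypothetical. The idea is that “conviction” for a proposal accumulates while tokens remain staked on it and decays when they are withdrawn, so sustained community preference eventually outweighs a brief burst of large-holder support:

```python
# Illustrative sketch of Conviction Voting: support accumulates over
# time while tokens stay staked on a proposal, and decays geometrically
# once they are withdrawn. Parameters are hypothetical.

def update_conviction(prev: float, staked: float, half_life: float = 10.0) -> float:
    """One time-step of conviction: geometric decay plus current stake."""
    alpha = 0.5 ** (1.0 / half_life)  # decay factor: conviction halves
    return alpha * prev + staked      # after `half_life` empty steps

def simulate(staked: float, steps: int, half_life: float = 10.0) -> float:
    """Conviction after holding a constant stake for `steps` time-steps."""
    conviction = 0.0
    for _ in range(steps):
        conviction = update_conviction(conviction, staked, half_life)
    return conviction

# A member stakes 100 tokens: conviction rises toward a steady state of
# staked / (1 - alpha), so a proposal passes only once support has been
# both large enough and held long enough to cross a threshold.
early, late = simulate(100, 5), simulate(100, 200)
```

Because conviction converges toward staked / (1 − alpha), the mechanism registers preference over time rather than a one-off snapshot, which is the property that distinguishes it from simple ‘yes’/‘no’ ballots.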

  3. How are the body and its stakeholders accountable to each other for their roles in the process?

FWB DAO is accountable to its members for the authority it exercises as a brand and an organization. Similarly, through localised events, participants are accountable for legitimately representing the FWB brand, using its tools (such as the Gatekeeper ticketing app), and acquiring new members that pay their dues back to the overarching DAO. Mechanisms for accountability include whether stakeholders accept the exercise of the host body’s authority, whether the host body operates transparently and according to organizational best practices, and whether stakeholders actively participate according to their roles and responsibilities (Malcolm, 2015).

  4. Is the body an empowered space?

For multistakeholder governance to ensue, the host body must meaningfully include stakeholders in governance processes, meaning that stakeholder participation is linked to spaces in which definitive decisions are made and outcomes are reached, rather than just deliberation or expression of opinion (Malcolm, 2015).

At present, participation in FWB DAO governance is limited, at best. Proposals are gated by team members who help edit, shape, and craft the language according to a template before a proposal can be posted to Snapshot by the Proposal Review Committee. Members can vote on proposals, with topics including “FWB x Hennessy Partnership,” grant selections, and liquidity management. According to core team members in their public talks, votes typically pass with 99% in favor, which is not a good signal of genuine political engagement and healthy democracy.

  5. Is this governance ideal maintained over time?

A criterion missing from the current principles of multistakeholderism is how the ideals of decentralized governance can persist over time. It is widely acknowledged that the Internet’s model of governance is not congruent with the initial ideals some held for a ‘digital public’, which has instead become privatized, monetized, and divisive. Inner power structures controlled by private firms and sovereign states permeate the architectures and institutions of Internet governance (DeNardis, 2014). Some argue that this corruption of the ideal over time can be addressed by deprivatizing the Internet to redirect power away from big tech firms and towards more public engagement and democratic governance (Tarnoff, 2022). In reality, both privatized network governance models and public ones can be problematic (Nabben, et al., 2020). The promise of a social DAO, and of crypto communities more broadly, is innovation in decentralized governance: the ability to make technical and political guarantees of certain principles.

The ideals of public, decentralized blockchain communities are at risk of following a similar trajectory to the Internet’s. What began as grassroots activism against government and corporate surveillance in the computing age (Nabben, 2021a) could be co-opted by the interests of big money, government regulation, and private competition (such as Central Bank Digital Currencies, Facebook’s ‘Meta’, Visa and Amex, etc.). For FWB to avoid this trajectory from enthusiastic early community to centralized concentration of power, a long-term view of governance must be taken. This demands deeper consideration and innovation towards a pattern language for decentralized governance itself.

Conclusion

Experiencing the governance dynamics of a social DAO surfaces some of the challenges of coordinating the governance and scaling of distributed infrastructure that blends multi-stakeholder, online-offline dynamics with the values of decentralization. The goal of FWB DAO is to allow people to gain agency through the creation of new economies that then propagate through cultural influence. This goal must carry through each localization and somehow align back to the overarching DAO as the project scales, to create not just culture but to further the cause of decentralization. What remains to be seen is how this creative community can collectively facilitate authentic, decentralized organizing for its impassioned believers through connections, tools, funding, and creative ingenuity on governance itself. Without incorporating the principles of meaningful multistakeholder inclusion in governance, DAOs risk becoming ‘a myth of decentralization’ (Mathew, 2016), riddled with power concentrations in practice. The principles of multi-stakeholderism from Internet governance offer one potentially viable set of criteria to guide the development of more meaningful decentralized governance practices and norms. Yet multistakeholder governance is intended to balance public interests and political concerns in particular contexts, not to serve as a model for all distributed governance functions (DeNardis & Raymond, 2013). Thus, the call to Decentralized Autonomous Organizations is to leverage the insights of existing governance models whilst innovating their own principles and tools: to continue exploring, applying, and testing governance models and authentically pursue their aims.


Bibliography

A16Z. (2021). “Investing in Friends With Benefits (a DAO)”. Available online: https://a16z.com/2021/10/27/investing-in-friends-with-benefits-a-dao/. Accessed October, 2022.

Abbate, J. (2000). Inventing the Internet. MIT Press, Cambridge.

Adams, T. E., Ellis, C., & Jones, S. H. (2017). Autoethnography. In The International Encyclopedia of Communication Research Methods (pp. 1–11). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781118901731.iecrm0011.

Alexander, C. (1964). Notes on the Synthesis of Form (Vol. 5). Harvard University Press.

Brummer, C J., and R Seira. (2022). “Legal Wrappers and DAOs”. SSRN. Accessed 2 June, 2022. http://dx.doi.org/10.2139/ssrn.4123737.

Berker, T. (2011). Review of M. Callon, P. Lascoumes, and Y. Barthe, Acting in an Uncertain World: An Essay on Technical Democracy. Minerva 49, 509–511. https://doi.org/10.1007/s11024-011-9186-y.

DAOstar. (2022). “The DAO Standard”. Available online: https://daostar.one/c89409d239004f41bd06cb21852e1684. Accessed October, 2022.

DeNardis, L. (2013). “The emerging field of Internet governance”. In W. H. Dutton (Ed.), The Oxford handbook of Internet studies (pp. 555–576). Oxford, UK: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199589074.013.0026.

DeNardis, L. (2014). The Global War for Internet Governance. Yale University Press: New Haven, CT and London.

DeNardis, L. and Raymond, M. (2013). “Thinking Clearly About Multistakeholder Internet Governance”. GigaNet: Global Internet Governance Academic Network, Annual Symposium 2013, Available at SSRN: https://ssrn.com/abstract=2354377 or http://dx.doi.org/10.2139/ssrn.2354377.

Epstein, D., C Katzenbach, and F Musiani. (2016). “Doing Internet governance: practices, controversies, infrastructures, and institutions.” Internet Policy Review.

FWB. (2022a). “FWB Fest 22”. FWB. Available online: https://fest.fwb.help/. Accessed October, 2022.

FWB. (2022b). “Kelsie Nabben: What are we LARPing about? | FWB Fest 2022”. YouTube (video). Available online: https://www.youtube.com/watch?v=UUoQ-sBbqeM. Accessed October, 2022.

FWB (n.d.). “Pulse”. FWB. Available online: https://www.fwb.help/pulse. Accessed October, 2022.

Gnosis. (2022). “Zodiac Wiki”. Available online: https://zodiac.wiki/index.php/ZODIAC.WIKI/. Accessed October, 2022.

Gottsegen, W. (2021). “Designer Eric Hu on Generative Butterflies and the Politics of NFTs”. CoinDesk. Available online: https://www.coindesk.com/tech/2021/10/07/designer-eric-hu-on-generative-butterflies-and-the-politics-of-nfts/. Accessed October, 2022.

Hassan, S., and P. De Filippi. (2021). "Decentralized Autonomous Organization." Internet Policy Review 10, no. 2:1-10.

Meijia, J. (@JoseRMeijia). (2022). “This is the way”. [Twitter]. Available online: https://twitter.com/makebrud/status/1556691400367824896. Accessed 1 October, 2022.

Mailland, J. and K. Driscoll. (2017). Minitel: Welcome to the Internet. MIT Press, Cambridge.

Malcolm, J. (2008). Multi-Stakeholder Governance and the Internet Governance Forum. Wembley, WA: Terminus Press.

Malcolm, J. (2015). “Criteria of meaningful stakeholder inclusion in Internet governance.” Internet Policy Review, 4(4). https://doi.org/10.14763/2015.4.391.

Mathew, A. J. (2016). “The myth of the decentralised Internet.” Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.425.

Nabben, K. (2021a). “Is a "Decentralized Autonomous Organization" a Panopticon? Algorithmic governance as creating and mitigating vulnerabilities in DAOs.” In Proceedings of the Interdisciplinary Workshop on (de) Centralization in the Internet (IWCI'21). Association for Computing Machinery, New York, NY, USA, 18–25. https://doi.org/10.1145/3488663.3493791.

Nabben, K. (2021b). “Infinite Games: How Crypto is LARPing”. CoinDesk. Available online: https://www.coindesk.com/layer2/2021/12/13/infinite-games-how-crypto-is-larping/. Accessed October, 2022.

Nabben, K. (2022). “A Political History of DAOs”. FWB WIP. Available online: https://www.fwb.help/editorial/cypherpunks-to-social-daos. Accessed October, 2022.

Nabben, K., M. Poblet, and P. Gardner-Stephen. (2020). “The Four Internets of COVID-19: the digital-political responses to COVID-19 and what this means for the post-crisis Internet”. 2020 IEEE Global Humanitarian Technology Conference (GHTC), pp. 1-8. doi: 10.1109/GHTC46280.2020.9342859.

Tarnoff, B. (2022). Internet for the People: The Fight for Our Digital Future. Verso Books: Brooklyn.

Winner, L. (1980). “Do Artifacts Have Politics?” Daedalus, 109(1), 121–136. Retrieved from http://www.jstor.org/stable/20024652.

Zargham, M., and K Nabben. (2022). “Aligning ‘Decentralized Autonomous Organization’ to Precedents in Cybernetics”. SSRN. Accessed June 2, 2022. https://ssrn.com/abstract=4077358.

 

Kelsie Nabben. (November 2022). “Decentralized Governance Patterns: A Study of "Friends With Benefits" DAO.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 86-101.


About the Author: Kelsie Nabben is a qualitative researcher in decentralised technology communities. She is particularly interested in the social implications of emerging technologies. Kelsie is a recipient of a PhD scholarship at the RMIT University Centre of Excellence for Automated Decision-Making & Society, a researcher in the Blockchain Innovation Hub, and a team member at BlockScience.


  

The Robots Are Among Us and 2062: The World that AI Made
Figure 1: The two books under examination. Picture taken by the author.

I want to share an anecdote. My doctoral fieldwork consisted of mixed historical analysis and interview-based research on artificial intelligence (AI) promises and expectations. I attended numerous talks on AI and robotics and frequently posted on social media about interesting material I encountered during my doctoral investigations. On July 15th, 2018, I received a generous gift by mail, sent by a very kind Instagram user named Chris Ecclestone, who, after a brief online chat about my PhD through the platform’s messaging utility, insisted he had to send me something he had found at his local charity shop (the charity-oriented UK equivalent of a thrift store/second-hand shop). The book’s title was The Robots Are Among Us, authored by a certain Rolf Strehl, published in 1955 by the Arco Publishing Company.

I was only able to find very limited information about Strehl – the most comprehensive information available online comes from a blogpost written by workers at the Heinz Nixdorf computer museum. From this, we learn, with the aid of online translation from German, that “he was born in Altona in 1925 and died in Hamburg in 1994,” that while writing this book “he was editor-in-chief of the magazine ‘Jugend und Motor’” (‘Youth and Motor,’ a popular magazine about automobiles), and that the book comes with a “number of factual errors” and “missing references.” According to the same website, the original 1952 German version of Die Roboter sind unter uns (Gerhard Stalling Verlag, Oldenburg) was among the first two nonfiction books written about robots and intelligent machines in German, translated into several languages. A quick Google Images search proved that, in addition to my copy of the English translation, the book was also published, with slightly modified titles, in several other languages: in Spanish (Han Llegado Los Robots – Ediciones Destino, Barcelona), Italian (I Robot Sono Tra Noi – Bompiani Editore, Milan), and French (Cerveaux Sans Âme: Les Robots – Editions et publications Self, Paris). This suggests that the book was considered by several international publishers to be credible enough for wide circulation, and as the English version’s paper inlay states, the book “is written with a minimum of technical jargon. It is written for the layman [sic]. It is a scientific book, but it is a sensational book: for it illuminates for us the shape of things to come”; one has to note the use of the word “sensational,” which in current debates about public portrayals of AI is mostly used as a derogatory term, implying distance from technical legitimacy.

Thus, I suggest that the book deserves excavation as indicative of the mid-1950s promissory environment around thinking machines, prior to the coinage of the term AI, although the English translation overlaps with the year the term was coined (more below).

On July 9th, 2019, almost a year after I received Strehl’s book, I attended a talk at the University of Edinburgh by Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales. Walsh, whose doctoral degree was obtained in Edinburgh, presented portions of his 2018 book 2062: The World that AI Made, which I acquired and read after the event. In contrast to Strehl’s rather obscure biographical notes, Walsh’s work is well documented on his personal website. In addition to his AI specialisation in constraint programming, Walsh’s work involves policy advising on building trustworthy AI systems, as well as extensive public outreach through popular media.

The book, published in English by Black Inc./La Trobe University Press, is of similar magnitude to Strehl’s, given that it has been translated widely: into German (2062: Das Jahr, in Dem Die Künstliche Intelligenz Uns Ebenbürtig Sein Wird – Riva Verlag, Munich), Chinese (2062:人工智慧創造的世界 – 經濟新潮社, Taipei), Turkish (2062: Yapay Zeka Dünyası – Say Yayınları, Ankara), Romanian (2062: Lumea Creata De Inteligenta Artificiala – Editura Rao, Bucharest), and Vietnamese (Năm 2062 – Thời Đại Của Trí Thông Minh Nhân Tạo – NXB Tổng Hợp TP. HCM, Ho Chi Minh City). Taking the number of translations as an indication of magnitude, I suggest that Walsh’s book can be classified as somewhat comparable to Strehl’s, given that, as mentioned on his website, it is “written for a general audience.” Thus, regardless of the different degrees of AI expertise and the respective contexts of their authors, I suggest these books can be contrasted as end-products indicating AI hype in 1955 and 2018. I hereby aim to recreate my personal experience of discovering the similarities between the two books.

Toby Walsh
Figure 2: Toby Walsh, Photo Credit CC-BY 4.0. Nadja Meister. (link: https://www.flickr.com/photos/vclatuwien/29968223557/)

I now invite the reader to take a look at the tables of contents displayed at the end of this essay, upon which I will now comment. Strehl’s book contents have been scanned from the original, whereas Walsh’s expanded book contents have been collated in a way that resembles Strehl’s for ease of comparison. (A note on presentation: as the reader will notice, Strehl’s chapter headings are followed by detailed descriptions of the chapters’ sections, very typical of books from that era. Walsh’s original table of contents includes only the main headings, although within the book similar sections to Strehl’s designate sub-chapters. I have manually copied this sub-heading structure onto the table below in lieu of scannable content.) Notice the similarities in both books’ first chapters, between “the failure of the human brain – the machine begins to think – ‘Homunculus’ is born – the beginning of a new epoch” (Strehl 1955) and “Machines That Learn – Computers Do More Than They are Told – The Machine Advantage – Our Successor” (Walsh 2018).

Both books’ second chapters review the technological advances of machine intelligence of their times: Strehl describes the abilities of the early computing and memory-storing machines ENIAC, Mark III, and UNIVAC, as well as the possibility of “automatic weapons.” Meanwhile, Walsh describes recent breakthroughs in game-playing such as Go, although his chapter 0005 is entirely dedicated to “killer robots,” “weapons of terror,” and “of error,” much like Strehl’s penultimate chapter, “The Beginning of the Future War,” which contains sections like “Robots become soldiers” and “mechanical brains take over command.” (Interestingly, Walsh does not refer to any cases of factory worker accidents caused by robotic technologies; however, Strehl mentions two cases of lethal robotic accidents in this chapter’s section “A Robot murders its Master,” similar to newspaper headlines about robotic killers (for example, Huggler 2015, McFarland 2016, or Henley 2022).)

Strehl’s third chapter asks, “Can the Robot Really Think?” in the same way that Walsh asks, “Should We Worry?” Both authors enquire into “The Age-Old Dream of an Artificial Human Being” (Strehl) and “Artificial General Intelligence – How Long Have We Got? – The Technological Singularity” (Walsh); and again, both refer to the question of “free will” in machines (Strehl: “Free will of the Machine?”; Walsh: “Zombie Intelligence […] The Problem of the Free Will”). Strehl dedicates two chapters to job displacement: “The Second Industrial Revolution,” focusing on industrial robotic technologies (“the Robots are in control – machine automatons replace worker battalions – Man [sic] is left out […] the factory without people”), and “The Dictatorship of the Automaton,” mostly focusing on automation technologies conceptually similar to AI (“the automatic secretary […] the telephone Robot listens attentively – Robots keep books conscientiously – Robots sort telegrams […] the Robot as master detective […] the whole of mankind [sic] is filed – Robot salesmen [sic] in the department store […] divorce by ‘automatic’ court decision”). Although today’s equivalents (robot assistants like Alexa or Echo, robotic “judges,” and concerns about data surveillance) are much more technologically advanced, the sentiment captured in Strehl’s book is strikingly similar to several sections in Walsh’s: “The Real Number of Jobs at Risk,” “Jobs Only Partly Automated – Working Less” (on the dangers of job automation), “Machine Bias – The Immoral COMPAS – Algorithmic Discrimination” (on cases of automated decision-making, as in the robotic judge COMPAS), and “AI is Watching You – Linked Data” (on the case of surveillance).

The Robots Are Among Us toc
Figure 3: The Robots Are Among Us Table of contents image of the 'Modern Man in the wheels of technique.'

By this point, it has become sufficiently clear that concerns about automation technologies, which in different times (or different regional and research contexts; consider the “I’m not a robot” captcha version of a Turing test) can be termed “AI” or “robots,” have been sustained at a surprisingly similar degree of comprehension. It is interesting to note some differences between the two books. First, it is useful to question how the authors gain what we might perceive as their promissory credibility, that is, the right to speculate about a new form of reality which is about to come. As already mentioned, Strehl falls short in terms of references – however, he sets out to clarify that the content presented is realistic: “This book is not about Utopia. It is a factual report of the present time collected from hundreds of sources.” Nevertheless, throughout his book, Strehl refers to warnings about machine intelligence expressed by pioneering minds in the field, often citing cyberneticist Norbert Wiener, but also mathematician Alan Turing, and others. Walsh’s approach is stricter, methodologically speaking, matching contemporary standards:

“In January 2017, I asked over 300 of my colleagues, all researchers working in AI, to give their best estimate of the time it will take to overcome the obstacles of AGI. And to put their answers in perspective, I also asked nearly 500 non-experts for their opinion. […] Experts in my survey were significantly more cautious than the non-experts about the challenges of building human-level intelligence. For a 90 per cent probability that computers match humans, the median prediction of the experts was 2112, compared to just 2060 among the non-experts [...] For a 50 per cent probability, the median prediction of the experts was 2062. That’s where the title of this book comes from: the year in which, on average, my colleagues in AI expect humankind to have built machines that are as capable as humans.” (AGI stands for Artificial General Intelligence, that is, the hypothesis that AI might be reaching or surpassing human intelligence, for example Goertzel 2014.)

Although the two authors exhibit different strengths in their research skills, they both rely on external sources to lend credence to their arguments. Moreover, they agree on the possibility of a rather inevitable new form of world which is, in part, already here, and will invite humanity to think of new forms of living in the near future. Their difference lies in their degree of optimism. Strehl agrees with Walsh that machines will always remain in need of human controllers; however, he suggests that machines will take control in a subtler way:

“Man [sic] will try to maintain his [sic] supremacy because the machines will always be limited creatures, without imagination and consciousness, incapable of inventiveness outside their own limits. But this supremacy of Man [sic] will only be an illusion, because the machines will have become so indispensable in an unimaginable mechanization of the technical civilization of the future that they will have become the rulers of this world, grown numb through technical perfection. The future mechanized order of society will not be able to continue existing without constant supervision of the thinking machines by their human creators. But the machines will rule.”

The following, more optimistic, passage by Walsh can be read as a hypothetical response to Strehl:

“But by 2062 machines will likely be superhuman, so it’s hard to imagine any job in which humans will remain superior to machines. This means the only jobs left will be those in which we prefer to have human workers.”

Walsh then refers to the emerging hipster culture characterised by appreciation of artisan jobs, craft beer and cheese, organic wine, and handmade pottery.

One should not forget that Walsh’s public outreach on AI extends in part from his vantage point as an AI researcher. His book admits challenges but also offers hopeful perspectives. Strehl’s book is written in a rather polemic fashion, although it admits the author’s fascination with the technical advancements; it is written by an outsider who probably never built a robot, in contrast to Walsh, who has developed algorithms. This difference in balance, small doses of warning followed by hopeful promise (Walsh) as opposed to small doses of excitement followed by dystopian futurism (Strehl), is telling of the expectational environment surrounding AI, which has been evolving at least since the second half of the 20th century, with roots in the construction of early automata as well as in mythology, religion, and literature.

Strehl’s book can be classified as indicative of broader circulating narratives which might have influenced decisions within the domain of practice, although it is difficult to find evidence and make robust claims about the ways in which such public narratives of robots, thinking machines, and electronic brains influenced the practical direction of research. Walsh’s book can be classified as a product of internal research practices and strategies, aimed at influencing broader narratives (the book’s popularity might be considered evidence of a sort). The themes the books share show that the field (or vision) of intelligent machines (here examined as AI) is at once broad, yet recognisable and limited in its various instantiations, from automated decision-making to autonomous vehicles.

In this essay, I do not want to make another claim about history repeating itself and the “wow” effect of hype-and-disillusionment cycles: belief in a purely circular history is as reductionist as belief in the modernist notion of linear progress and innovation. This repetition of themes is not grounds for the same old caution that AI warnings are of no value because humanity’s previous experience proved so. It is, however, a call for awareness of hype, for sensitivity to sensationalism, and for treating products of mass consumption about science and technology as artefacts produced by specific and variable social contexts on the micro scale (such as institutional agendas) and by rather generalised and constant psychological patterns on the macro scale: hope and fear. In 1980, Sherry Turkle observed that the black-box structure of computers invites users to project onto them their own optimism or pessimism, thus resembling inkblot tests; she treated “computers as Rorschach.” Forty-two years after Turkle’s paper, computers and robots have evolved considerably; however, despite numerous calls for explainable AI systems, nothing prevents us from treating “AI as Rorschach” and “robots as Rorschach” as well. This might amount to a creative and therapeutic endeavour in our experience with AI.

Figure 4: Rolf Strehl, The Robots Are Among Us, table of contents.
Figure 5: Toby Walsh, 2062: The World That AI Made, table of contents, including sub-sections not shown on the printed table of contents page.

Bibliography

Goertzel, Ben. (2014). Artificial general intelligence: concept, state of the art, and future prospects. Journal of Artificial General Intelligence, 5(1), 1-46.

Heinz Nixdorf MuseumsForum (2017). Die Roboter sind unter uns. Blog post. November 7, 2017. Retrieved 18-06-2021 from: https://blog.hnf.de/die-roboter-sind-unter-uns/

Henley, Jon (2022, July 24). Chess robot grabs and breaks finger of seven-year-old opponent. The Guardian. https://www.theguardian.com/sport/2022/jul/24/chess-robot-grabs-and-breaks-finger-of-seven-year-old-opponent-moscow

Huggler, Justin. (2015, July 2). Robot Kills Man at Volkswagen Plant in Germany. The Telegraph. Retrieved 3-07-2015 from http://www.telegraph.co.uk/news/worldnews/europe/germany/11712513/Robot-kills-man-at-Volkswagen-plant-in-Germany.html

McFarland, Matt. (2016, July 11). Robot’s Role in Killing Dallas Shooter is a First. CNN Tech. Retrieved 29-04-2017 from http://money.cnn.com/2016/07/08/technology/dallas-robot-death/index.html

Strehl, Rolf. (1952 [1955]). The Robots are Among Us. London and New York: Arco Publishers.

Turkle, Sherry. (1980). Computers as Rorschach: Subjectivity and Social Responsibility. Bo Sundin (ed.). Is the Computer a Tool? Stockholm. Almquist and Wiksell. 81–99.

Walsh, Toby. (2018). 2062: The World that AI Made. Carlton: La Trobe University Press, Black Inc.

Walsh, Toby. (2021). Personal website. UNSW Sydney. Retrieved 20-07-2021 from http://www.cse.unsw.edu.au/~tw/

 

Vassilis Galanos (October 2022). “Longitudinal Hype: Terminologies Fade, Promises Stay – An Essay Review on The Robots Are Among Us (1955) and 2062: The World that AI Made (2018).” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 73-87.


About the Author: Vassilis Galanos (it/ve/vem) is a Teaching Fellow and Postdoctoral Research Associate at the University of Edinburgh, bridging STS, Sociology, and Engineering departments and has co-founded the local AI Ethics & Society research group. Vassilis’s research, teaching, and publications focus on sociological and historical research on AI, internet, and broader digital computing technologies with further interests including the sociology of expectations, studies of expertise and experience, cybernetics, information science, art, invented religions, continental and oriental philosophy. 


 

Photo by Christian Erfurt on Unsplash.

Recently, the term “Quiet Quitting” has gained prominence on social media, driven by employees who are changing their standards about work and by business leaders who are concerned about the implications of this change in attitudes and expectations in the workplace. The term initially started trending with posts from employees sharing their perspective. These employees are vocal about changing the standards of achievement and success at work, especially now that the boundaries between work and home are no longer clear.

Quiet Quitting is a call from employees who still value their work but also want to feel valued and trusted in return. It is a call from those whose work and personal lives are out of balance and who are looking for a healthier way to set boundaries. It is a reaction to the changes caused by the pandemic, which allowed some employees to work from home but further blurred the lines between work and home space. It is about corporations finding a multitude of ways to ensure their employees are connected to work around the clock, and about workers not wanting to be available to their employers for time for which they are not compensated or work for which they are not recognized. It should not be a reason to criticize, shame, scare, or surveil employees.

The pandemic caught many organizations unprepared for a sudden shift to remote work arrangements. Employers who were worried about the performance levels of their now-remote workers implemented several measures, some more privacy-invading than others. Unfortunately, for many companies, the knee-jerk reaction was to implement employee monitoring (or surveillance) software, sometimes referred to as ‘bossware’. Vendors selling this software tend to pitch their products as capable of achieving one or more of the following: “increase in productivity/performance; prevention of corporate data loss; prevention of insider threat; effective remote worker management; data-based decision-making on user behavior analytics; sentiment analysis to identify flight-risk employees.” The underlying assumptions of this set of functions are:

  • employees cannot be trusted to do what they are hired to do;
  • human complexity can be reduced to a handful of data categories; and
  • a one-size-fits-all definition of productivity exists, and the vendor’s definition is the correct one.

In response to employees who say they will only do what they are hired to do, and no more, until expectations change, AI-based employee surveillance systems are now being discussed as a possible answer to ‘Quiet Quitting’. Employee surveillance was never a solution for creating equitable work conditions or for increasing performance in a way that respected the needs of employees. It certainly cannot be a solution to the demands of workers trying to stay physically and mentally healthy.

 

Photo by Chris Yang on Unsplash.

The timing of tasks and the monitoring of employee activity in assembly lines and warehouses go back to the times of Frederick Winslow Taylor. Taylorism aimed to increase efficiency and production and eliminate waste. It was also based on the “assumptions that workers are inherently lazy, uneducated, and are only motivated by money.” Taylor’s approach and practice have been brought to their contemporary height by Amazon, with its minute-by-minute tracking of employee activity and termination decisions made by algorithmic systems. Amazon uses tools such as navigation software, item scanners, wristbands, thermal cameras, security cameras, and recorded footage to surveil its workforce in warehouses, delivery trucks, and stores. Over the last few years, employee surveillance practices have been spreading into white- and pink-collar work too.

According to a recent report by The New York Times, eight of the ten largest private U.S. employers track the productivity metrics of individual workers, many in real time. The same report details how employees described being tracked as “demoralizing,” “humiliating,” and “toxic,” and that 41% of employees report that nobody in their organization communicates with them about what data is collected, why, or how it is being used. Another 2022 report by Gartner shows that the number of large employers using tools to track their workers has doubled since the beginning of the pandemic to 60%, with this number expected to rise to 70% within the next three years.

Photo by Carl Heyerdahl on Unsplash.

Employee surveillance software is extensive in its ability to capture privacy-invading data and make spurious inferences about worker performance. The technology can log keystrokes and mouse movements; analyze employees’ calendar activity; screen emails, chat messages, or social media for both activity intervals and content; take screenshots of the monitor at random intervals; analyze which websites an employee has visited and for how long; force activation of webcams; and monitor the terms searched by the employee. As an article in The Guardian on AI-based employee surveillance tools explains, the concerns about these products range from the obvious privacy invasion in one’s home to the reduction of workers, their performance, and their bodies to lines of code and flows of data to be scrutinized and manipulated. Systems that automatically classify a worker’s time into “idle” and “productive” reflect their developers’ value judgments about what is and is not productive. An employee spending time at a colleague’s desk explaining work, or mentoring them for better productivity, can be labeled by the system as “idle”.
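To make concrete how such value judgments get baked in, consider a deliberately simplified, hypothetical Python sketch of an “idle/productive” classifier of the kind described above. The threshold, the list of “productive” applications, and all names here are invented for illustration; this is not any vendor’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class ActivitySample:
    """One monitoring window, e.g. five minutes of captured input events."""
    keystrokes: int
    mouse_events: int
    active_app: str

# Arbitrary constants chosen by a hypothetical developer -- these ARE the value judgment:
PRODUCTIVE_APPS = {"excel", "outlook", "crm"}  # anything else counts against the worker
IDLE_THRESHOLD = 20                            # fewer input events than this is "idle"

def classify(sample: ActivitySample) -> str:
    """Label a monitoring window 'productive' or 'idle' from surface signals alone."""
    events = sample.keystrokes + sample.mouse_events
    if events < IDLE_THRESHOLD:
        return "idle"
    if sample.active_app not in PRODUCTIVE_APPS:
        return "idle"
    return "productive"

# Mentoring a colleague away from the keyboard produces almost no input events:
print(classify(ActivitySample(keystrokes=0, mouse_events=2, active_app="excel")))  # idle
```

Everything the classifier cannot observe, such as explaining work at a colleague’s desk, is silently scored as “idle”; the developer’s arbitrary constants are the productivity definition.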

Even though natural language processing is not capable of understanding the context, nuance, or intent of language, AI tools that analyze the content and tone of one’s emails, chat messages, or even social media posts ‘predict’ whether a worker is a risk to the company. Forcing employees who work from home to keep their cameras on at all times can lead to private and protected information being disclosed to the employer. Furthermore, these systems remove basic autonomy and dignity at the workplace. They push employees to compete rather than cooperate, and to think of ways to game the system rather than of more efficient and innovative ways to do their work. A CDT report details how bossware can harm workers’ health and safety: by discouraging and even penalizing lawful, health-enhancing employee conduct; by enforcing a faster work pace and reduced downtime, which increases the risk of physical injuries; and by increasing the risk of psychological harm and mental health problems for workers.
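As an illustration of how crude such inferences can be, the following hypothetical Python sketch scores “flight risk” by keyword matching, a bag-of-words shortcut with no access to context, nuance, or intent. The word list, function, and messages are invented for illustration and do not reflect any real product.

```python
# Invented keyword list -- a real tool would be fancier, but the failure mode is the same.
NEGATIVE_TERMS = {"quit", "unfair", "frustrated", "burnout", "overworked"}

def flight_risk_score(messages: list[str]) -> float:
    """Return the fraction of messages containing any 'negative' keyword.
    Substring matching has no notion of context, nuance, or intent."""
    if not messages:
        return 0.0
    flagged = sum(
        any(term in message.lower() for term in NEGATIVE_TERMS)
        for message in messages
    )
    return flagged / len(messages)

messages = [
    "I will quit smoking this year",                     # flagged: 'quit', off-topic
    "Great job on the launch, team!",                    # not flagged
    "The unfair-competition clause needs legal review",  # flagged: 'unfair', routine work
]
print(flight_risk_score(messages))  # 0.6666666666666666, despite zero actual discontent
```

Two of the three harmless messages are flagged, which is exactly the kind of spurious “prediction” that can mark a worker as a risk to the company.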

Just as employee surveillance cannot replace trusting and transparent workplace relationships, it cannot be a solution to Quiet Quitting. Companies implementing such systems do not understand the fundamental reasons for this call. Those reasons are not universal, and there is no single solution for employers. Responses may range from fairer compensation to better communication practices, investment in employees’ skills, and setting boundaries between work and personal life. Employers need to create space for open communication and understand the underlying reasons for the frustration and the call for change. Employers need to ‘hear’ what their employees are telling them, not surveil them.

==============================

Disclosure: The author also provides capacity-building training and consulting to organizations for AI system procurement due diligence, responsible design, and governance. Merve Hickok is a certified Human Resources (HR) professional with 20 years of experience, an AI ethicist, and an AI policy researcher. She has written extensively about different sources of bias in recruitment algorithms, their impact on employers and vendors, and AI governance methods; provided public comments for regulations in different jurisdictions (New York City Law 144, California Civil Rights Council, White House Office of Science and Technology RFI); co-crafted policy statements (European Commission) and contributed to the drafting of audit criteria for AI systems (ForHumanity); and has been invited to speak at a number of conferences, webinars, and podcasts on AI and recruitment, HR technologies, and their impact on candidates, employers, businesses, and the future of work. She has been interviewed by HR professional organizations (SHRM newsletter, SHRM opinion pieces) and by newspapers (The Guardian) about her concerns and recommendations.


Bibliography

Bose, Nandita (2020). “Amazon's surveillance can boost output and possibly limit unions – study.” Reuters, August 31.

Corbyn, Zoe (2022). “‘Bossware is coming for almost every worker’: the software you might not realize is watching you.” The Guardian, April 27.

Kantor J, Sundaram A, Aufrichtig A, Taylor R. (2022). “The Rise of the Worker Productivity Score.” New York Times, August 14.

Scherer, Matt and Lydia X. Z. Brown (2021). “Report – Warning: Bossware May Be Hazardous to Your Health.” CDT, July 24.

Turner, Jordan (2022). “The Right Way to Monitor Your Employee Productivity.” Gartner, June 09.

Williams, Annabelle (2021). “5 ways Amazon monitors its employees, from AI cameras to hiring a spy agency.” Business Insider, April 5.

Wikipedia. Digital Taylorism. https://en.wikipedia.org/wiki/Digital_Taylorism.

 

Merve Hickok (September 2022). “AI Surveillance is Not a Solution for Quiet Quitting.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 65-72.


About the Author: Merve Hickok is the founder of AIethicist.org. She is a social researcher, trainer, and consultant, with work intersecting AI ethics, policy and regulation. She focuses on AI bias, impact of AI systems on fundamental rights, democratic values, and social justice. She provides consultancy and training services to private & public organizations on Responsible AI, to create awareness, build capacity, and advocate for ethical and responsible development, use and governance of AI. Merve is also Data Ethics Lecturer at University of Michigan, and Research Director at Center for AI & Digital Policy. The Center educates AI policy practitioners and advocates across ~50 countries and leads a research group which advises international organizations (such as European Commission, UNESCO, Council of Europe, etc) on AI policy and regulatory developments. Merve also works with several non-profit organizations globally to advance both the academic and professional research in this field for underrepresented groups. She has been recognized by a number of organizations - most recently as one of the 100 Brilliant Women in AI Ethics™ – 2021, and as Runner-up for Responsible AI Leader of the Year - 2022 (Women in AI). Merve was previously a VP at Bank of America Merrill Lynch, and held various senior Human Resources roles. She is a Senior Certified Professional by SHRM (Society of Human Resources Management).


 

Figure 1: Robert Dodson, featured as the “Computer Whiz Kid” in the December 1969 issue of Ebony Magazine.

The Computer Whiz Kid

The ability to comment, like, and share stories is a powerful feature of our digital world. Small stories online can gain big attention through the conversations they inspire. In 1970, however, long before influencers and their platforms of choice, the internet, and publicly used digital networking systems, Anita Taylor wrote a letter. Her “letter to the editor,” a short note of less than 100 words, was published in the March 1970 issue of Ebony Magazine:

I have read the article Computer Whiz Kid (December 1969) numerous times. It is so encouraging to hear of a black youth achieving such high goals, especially for those of us who live in the Deep South. It’s this sort of inspiration that is needed to give us hope and faith.

I am a Junior in high school. At present, I’m enrolled in a chemistry course and physics will probably appear on my schedule for the next session.

--Anita Taylor  

Anita Taylor, much like a high schooler might today, was commenting on a story that inspired her. “The Computer Whiz Kid,” a “lanky Chicago teenager” named Robert Dodson, motivated Anita to act by showing her what was possible. Her response to the article indicates how literature served as a method for connecting readers and symbolic individuals as members of a single community during the mid-20th century. Dodson’s story, with its crisp photographs and clear messages of black success, progress, and creative technical ability, entered the homes of thousands of black families across the nation. In Ebony, Robert Dodson as “The Computer Whiz Kid” was a symbol providing “hope” and “faith” to an ever-growing audience (101-104).

Robert Dodson is also an example of how the work of different communities of knowledge and action came together in a way that shaped an individual. Dodson’s presence on the campus and in a mixed dorm at the Illinois Institute of Technology, where he was a freshman in 1969, represents a point in history where the work of civil rights activists and black betterment organizations was pushing up against the troubled history of segregation and unequal opportunity in Chicago. Thirty years before his story, the Illinois Institute of Technology launched an architectural expansion and urban renewal projects that resulted in the removal of land wealth from segregated black communities who had built for themselves a city within the city. Growing up in the shadow of the Institute, Dodson’s hobby of choice, building and programming his computers, provided a retreat from the gang activity inundating his neighborhood while connecting him to the products of knowledge communities far away. In 1966, Cambridge University Press published the book he used to build his computer, We Built Our Own Computers. The book was designed to “explain some of the ideas and uses of computers to the intelligent and interested schoolboy who, it is hoped, will then be stimulated to design and build his own computing machines” (Bolt and Harcourt, xi). To make his machine, Dodson enlisted the help of family, and he also used a brief internship at the North American Company for Life and Health Insurance to play with bigger and stronger versions of what he was making at home on the dining room table.

 

Figure 2: Cover of the book We Built Our Own Computers, published by Cambridge University Press in 1966.

Dodson’s Computer, a Group Project

Dodson’s success at transforming a hobby (building computers) into a potential career was the product of different communities who unintentionally collaborated to make a future in computing possible for him. The social-work-focused and volunteer-powered Chicago Urban League contributed to the integration of the Illinois Institute of Technology and possibly connected Dodson to education and employment opportunities. At Cambridge University, educators wanted to share computing knowledge with American youth, so they wrote and published educational books to do just that. Other communities that contributed to Dodson’s success include the groups of teachers and librarians who encouraged Dodson; the vendors of electrical parts and pieces, who supplied the bits needed to build a computer; and the college admission personnel who interviewed and then admitted Dodson. Additionally, the finance and housing institutions that made higher education economically possible for Dodson, and the healthcare workers who supported his fitness and readiness for dorm life, were all communities contributing to his success.

With so many “communities” influencing the opportunities possible for an individual, it can be challenging to discern what makes a community, which communities matter for a history of computing, and the relationship between the community and the individual. However, as technologies that constructed the symbolic narrative of Dodson and connected that narrative to a black audience, publications like Ebony Magazine held African-American computing communities together. Dodson, through Ebony, could influence readers like Anita Taylor to act on their dreams and work towards their “high goals.” Likewise, through the story of Dodson, a special kind of black individual was constructed. This individual, not always a “Computer Whiz Kid,” was to be emulated by the reader, reproduced in black society, and shared by the media repeatedly. Looking at media made for and by black people, African American computing communities consisted of the audience for “The Computer Whiz Kid” and people like Robert Dodson, who allowed their stories to be shared. African American computing communities also include the organizations that decided that the black readers of Ebony in 1969 needed to meet “The Whiz Kid.”

Civil rights and black betterment organizations fighting for equality and freedom sought to create symbols of black defiance, hope, future, and success. An example of their work is Rosa Parks, who became a symbolic individual representing defiance and resistance to the system of inequality that bolstered segregation, and not by chance. In 1955, civil rights organizers from the NAACP waited to find the right person to build a public legal case around. Rosa Parks was not the first person of African descent to be arrested in Montgomery, Alabama, for disobeying segregation bus laws. She was, however, the one that civil rights organizers identified as being best suited for the spotlight (Tufekci 62-63).

Figure 3: The short film A Morning for Jimmy, produced by the National Urban League and Association Films, Inc., tells the story of an African American youth named Jimmy who encounters racial discrimination in his search for employment.

African American computing communities consisted of an audience hungry for a better future, civil rights and black betterment organizations fighting to make opportunity possible, and the black press that deliberately connected audience and organizers to improve the status of black people in America. Audience, media, symbolic individuals, and civil rights organizing, the elements that comprise African American computing communities, are also characteristic of the history of other black communities. The literature on black labor, media, activism, class, and culture of the 19th and 20th centuries proposes that the large collective “African-American community” was formed out of smaller communities (in fields of work, in neighborhoods, on HBCU and college campuses). These smaller communities networked for full citizenship, creating cultural products (literature, language, attitudes) that organized black people nationally into a people with a distinct voice in American history. First shut out of mainstream society by racist classifications as other than American, human, and citizen, black people responded to this willful stifling of black futures, rooted in histories and legacies of inequality, by forming a demographic with unique language, culture, and politics (Foner and Lewis 511).

When black people were not allowed to live as full citizens, organizing for “the betterment of the Negro race” became the mission of societies that made minority betterment the ultimate point of organizing. Professional and social organizations like the Brotherhood of Sleeping Car Porters, the NAACP, the National Technical Association, Alpha Phi Alpha, Kappa Alpha Psi, Alpha Kappa Alpha, and Delta Sigma Theta have shaped the image of the African-American as one that is not peripheral to the project of America. By doing so, their missions are entwined in the history of technology in America. Connected by the goal of bringing black people into the “project of America,” a project shaped by innovation and a distinct spirit of rugged individuality and materiality, they sought to democratize the labor and culture of technology. No longer would the machinery that powers America be inappropriate and inaccessible for black people because of race: the future these communities fought for was one where black people could be both black and American, black and skilled, black and professional, black and technical, and black and middle-class.

The Black Press       

African American communities of computing, like other groups in the history of computing, are made of writers, doers, and readers. Not just the remarkable men and women who fought to succeed, but the communities they belonged to and the conversations and messages they were a part of. All members of black computing communities were connected by automatic second-class status, where they were locked out, misrepresented, and stereotyped in the mainstream press. In tune with the needs of its audience, black print media was the most influential information medium for black people. This media amplified the voice of the people while explaining what the world of war, of technology, of business was and what it could mean to them. The black press, known as the "fighting press," utilized information technology to connect members in different communities for common goals or shared interests (O’Kelly 13).

In general, magazines, newspapers, and other print media are forms of public discourse that allow readers to engage with ideas, both old and new. Print media disseminates ideas by using the language and values that matter to the audience of the magazine. Language and values can be common sense beliefs regarding fairness, citizenship, and usefulness. Print media uses "frames" or the principles that organize information by referencing the social structures and realities, real or imagined, that matter to an individual or an audience. In this history, the frames used by the black press were ones that focused on the reality of black life in America: segregation and second-class status. Magazines organize information into frames so that the content is not disconnected from the social understanding of readers. This organization helps readers make sense of the new, by grounding the unknown in the familiar and "correct." When the content of magazines is computer technology, "common sense" values and power dynamics are embedded in how these new technologies are contextualized for audiences. Black newspapers framed the computer as a tool for black freedom by focusing on skill, education, professionalization, class, and materiality - issues that were already in the minds of the black public.

Looking away from black media, toward what could be called “mainstream media,” the result of frames for technological diffusion is stories of computers that show them to be hosts for useful activities and social evolutions. A quick historical narrative of this framing from the 1950s to the present day, found in “mainstream” magazines, shows that what computers are and can do changes as their technical capabilities develop and audiences become more familiar. The frames used to describe computers in business magazines in the 1950s generally describe them as calculators useful for processing numerical data. Eventually, computers become more than just calculators but a way to improve speed and efficiency, a tool for management. They are giant, powerful brains that threaten to replace workers in an expanding range of fields. To a different audience, and for a more advanced computer, they are not just computers but hobbies and toys. As computers become “personal” in the 1980s, they are not only computers but extensions of individuality, independence, and creativity (Cogan 248-265). By the end of the 20th century, the ability to set up networks through personal computers makes them not only computers but communication devices that are part of a global network of information sharing. As computers travel and find homes in communities of color in the U.S. and globally, they become more than just computers but tools for development and participation in the global information economy.

The frames that referenced the values, fears, truths, and realities of African Americans in the 20th century were notably different from those of their white counterparts. Likewise, print media tells us how computers were incorporated into African American life during this time and why they were incorporated differently than in the mainstream publications usually studied. This is not to say that black people would not have read magazines like Time or Newsweek; however, the frames used by mainstream publications were not concerned with the black perspective, thus creating the need for a black press.

Black newspapers shared the good news of opportunity while not ignoring the harsh realities of America’s racialized labor economy; they also offered “what if” scenarios. In a letter to the editor in the New Pittsburgh Courier of September 6, 1969, Jesse Woodson Jr. proposed a solution for the criminal justice problem:

Dear Editor:
In view of the present inequalities which exist in the detection, prosecution, and confinement of black criminals vs. white, I think the black man would receive a great deal more fairness and impartiality from an IBM computer.

First, identify and catalog the various crimes. Next, edit the trials of the various city, state, and government courts over the past 25 years. Include all facts concerning investigations, acquittals, and convictions, plus the daily dialogue of both the defense and the trial counsels. The computer, when programmed with the above information, would then be capable of rendering a decision based on the aggregate experience of the nation's law interpreters.

This decision would not take into consideration race or background. However, as likely as not, some Mississippi court will discover the need for one computer for white and another for blacks.

--Jesse Woodson JR

What if a computer could solve the problem of racism in America? What if “race and background” were irrelevant in the new technical order? We now know that computers are not impartial arbiters, and people have yet to successfully exclude race and class from computerized decision-making systems. In 1969, however, Jesse Woodson and those who read his suggestion were mentally experimenting with the computer as a tool for freedom from prejudice and from the mainstream’s connection between black and criminal. Even this ideal “what if” came with skepticism, as Woodson notes that the racist system could corrupt even “fair” computing decision-making systems operating within it.

Conclusion

From the 1940s to the 1980s, emissaries like "The Whiz Kid" ventured into the slowly integrating universities and offices of the information age. They ventured out, but they also reported back. Through black media, they communicated what computing meant for black people, and, as skilled workers in the new computer age, they embodied the characteristics of success. Through the technologies of storytelling, their image and traits became ingrained in community memory as necessary for the future. Because of them and the machines they controlled, new symbolic identities were formed, dismissed, and made immovable, stretching what held a community together across lives and worlds unique to the imaginations of its members.


Bibliography

Aspray, William and Donald Beaver. (1986). "Marketing the Monster: Advertising Computer Technology," Annals of the History of Computing, vol. 8, no. 2, pp. 127-143. doi: 10.1109/MAHC.1986.10038.

Bolt, A. B., Harcourt, J. C., Hunter, J. (1966). We Built Our Own Computers. Cambridge: Cambridge University Press.

Boyd, M. (2008). Jim Crow Nostalgia: Reconstructing Race in Bronzeville. Minneapolis: University of Minnesota Press.

Brown, Tamara, Gregory Parks, and Clarenda Phillips. (2012). African-American Fraternities and Sororities: the Legacy and the Vision, 2nd ed, Lexington: University Press of Kentucky.

Cogan, Brian. (2005) “Framing usefulness: An examination of journalistic coverage of the personal computer from 1982–1984,” Southern Journal of Communication, vol. 70, no. 3, pp. 248-265. doi: 10.1080/10417940509373330.

Foner, Philip and Ronald Lewis. (1983/2019). The Black Worker from the Founding of the CIO to the AFL-CIO Merger, 1936-1955. Philadelphia: Temple University Press, p. 511.

Gibbons, Kelcey. (2022). Inventing the Black Computer Professional. In J. Abbate and S. Dick (Eds.), Abstractions and Embodiments: New Histories of Computing and Society (pp. 257-276). Johns Hopkins University Press.

McDonough, John and Karen Egolf. (2003). “Computers,” In The Advertising age encyclopedia of advertising, New York: Routledge.

O'Kelly, Charlotte. (Spring 1982). "Black Newspapers and the Black Protest Movement: Their Historical Relationship, 1827-1945.” Phylon, vol. 43, no. 1, p. 13.

Taylor, Anita. (March 1970). “Computer Whiz Kid,” Ebony, pp. 17.

Taylor, Anita. (December 1969). “Computer Whiz Kid,” Ebony, pp. 101-104.

Tierney, Sherry. (2008). “Rezoning Chicago's Modernisms: 1914–2003,” (Master Thesis., Arizona State University), 6-99.

Tufekci, Zeynep. (2017). Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven: Yale University Press.

Woodson JR, Jesse. (September 1969). “Job for Computer.” New Pittsburgh Courier, 14.

 

Kelcey Gibbons (August 2022). “Framing the Computer.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 54-64.


About the Author: Kelcey Gibbons is a PhD student in MIT's program in History, Anthropology, and Science, Technology, and Society. She studies the history of the African American experience of technology, with a focus on engineering and computing communities of the late 19th through the mid-20th centuries. 


 

With its tendency to grip popular imaginaries and utopian fantasies, artificial intelligence has crystallized the enduring hope for easy technological solutions to the world’s greatest problems and fears (Haigh & Ceruzzi, 2021; Plotnick, 2018). It has been hailed as “the magic wand destined to rescue the global capitalist system from its dramatic failures” (Brevini, 2022, p. 28), and has been positioned as the linchpin of modern civil society. But, while developments in artificial intelligence technologies are commonly considered among the most important factors in shaping the modern condition, they have also exacerbated inequality, ushered in a new era of discrimination (D’Ignazio & Klein, 2020; Benjamin, 2019; Radin, 2017), left irreversible environmental damage (Brevini, 2022; Dauvergne, 2021), worsened labour struggles (Frey, 2021; Pasquale, 2020; Gray & Suri, 2019), and concentrated power – and wealth – in the hands of the privileged elite (Brevini, 2022; Crawford, 2021; Frey, 2019). As such, critically studying artificial intelligence requires a multifaceted understanding of it as being both controllable and controlling, dependent and autonomous, minimal and robust, submissive and authoritative, and determined and determinable.

To fully understand these binaries and their implications, artificial intelligence research undertaken in the humanities and social sciences warrants a long-term, historical approach that views artificial intelligence in the broader context of technological development, including the social, political, environmental, and cultural forces impacting it. This is especially the case given that the so-called “artificial intelligence boom” in academia has led to a bias towards works published in the last couple of years. But, if properly informed by the past, artificial intelligence research is more likely to prepare users for the future while also shedding light on the ways that we must act differently in the face of technological change.

Cloenda Summit Digital Future Society: Mind Map
Figure 1: Cloenda Summit Digital Future Society: Mind Map.
Courtesy of Barcelona.cat, CC BY-NC-ND 2.0.
 

Technology development and usage carry the imprint of political, ontological, and epistemological ideologies, such that every modern technology, including and especially artificial intelligence, is an infinitesimal representative of not just what users know, but how users come to know. Insofar as the humanities and social sciences are interested in technology as an instigator of cultural change, these disciplines must centralize its historical and epistemological dimensions, and investigate how, at every major historical moment in the development of modern technology and artificial intelligence/computational systems, users have adapted to new forms of knowledge-making.

Although most research in humanities and social sciences exhibits some kind of historical immediacy, it tends to be detached from larger epistemological considerations that align with major historical moments of change. Understanding, at each major technological juncture, how technology users come to know, may be crucial to developing better knowledge about technology (including artificial intelligence), its users, and the world.

This research would involve a multifaceted, interdisciplinary methodology that is both “anti-modern” and philosophical. Edwards (2003), for example, suggests that a historical and archival approach to technological inquiry avoids falling into the trap of “technological determinism” that plagues so much current artificial intelligence research, especially research conducted through short-term analyses. Selective attention primarily to the “modern” aspects of infrastructures can produce blindness to other aspects that may, in fact, be “anti-modern”; as Golumbia (2009) contends, irrespective of “new” technologies, human societies remain bound by the same fundamental forces that they always have been, so technological changes are best seen as shifts in degree rather than in kind. For this reason, technology ought to be assessed with reference to the past, especially because the computer apparatus leaves “intact many older technologies, particularly technologies of power, yet puts them into a new perspective” (Bolter, 1984, p. 8).

US Navy Cryptanalytic Bombe
Figure 2: US Navy Cryptanalytic Bombe.
Courtesy of brewbooks, CC BY-SA 2.0.

This approach to artificial intelligence research would model a different kind of temporal orientation for the humanities and social sciences that is rooted in the recognition that both ethereal, “cloud-like” technologies and “resolutely material media” (Mattern, 2017) have always co-existed. Because the old and the new necessarily overlap, it is important to draw archival linkages to produce more precise and comprehensive evaluations of technology and technological change. As Chun (2011) notes, new media races simultaneously towards the future and the past in that “the digital has proliferated, not erased, [historical] media types” (pp. 11, 139).

An historical way forward may also be key to confronting and dismantling algorithmic coloniality, the idea that colonial logics are replicated in computational systems, including in how sovereignty and land exploitation are embedded in the digital territory of the information age (Mohamed, Isaac, & Png, cited in Acemoglu, 2021; Lewis et al., 2020; Radin, 2017). Algorithmic coloniality suggests that the dominance and manipulative power of the world’s largest technology corporations mirrors traditional strategies of imperial colonizers (Brevini, 2022, p. 95). While the benefits of technological innovation accelerate economic gains for the privileged elite, Mohamed, Isaac, and Png (cited in Acemoglu, 2021) argue that any pathway to shared prosperity must address colonial legacies and three distinct forms of algorithmic harms: algorithmic oppression, exploitation, and dispossession (p. 61). Doing so is not only consequential for people who identify as being Indigenous; it may provide the tools necessary for intervening in the perpetuation of discrimination, generally (Radin, 2017). This, Lewis et al. (2020) claim, forms a powerful foundation to support Indigenous futurity (p. 65) while injecting artificial intelligence development with new ontologies whose imaginations and frameworks are better suited to sustainable computational futures (p. 6).

Extending from this, an historical approach may also be key to recognizing “non-Western,” alternative ways of knowing and being, including how “non-Western” technology may influence future iterations of artificial intelligence technologies. This is made clear in the Indigenous Protocol and Artificial Intelligence Working Group’s explanation of the potential links between artificial intelligence technologies and both the Octopus Bag – a multisensorial computing device – and Dentalium – tusk-like shells filled with “computational fluid dynamics simulations” (Lewis et al., 2020, pp. 58-69). This approach, however, may present methodological challenges as researchers try to embrace the nourishing aspects of traditional value systems while still accommodating modernity.

An historical approach may also serve environmental considerations well, especially in the context of the humanities and social sciences. Adequate research on renewability, ecofuturisms, and the environmental costs of artificial intelligence should span the entire production chain, including the historical circumstances in which those “productive” relationships arose. This view is critically important to exposing the environmental effects of technology, while recognizing that the ecological and social precarity caused by technology is not just a timely and urgent concern, but also one with a rich history. Too much recent and short-term research looks at the ecological impacts of artificial intelligence as a “new” phenomenon, rather than one that replicates historical trends albeit through modern consumption rates (which make environmental effects seem historically unique). Informed by the past, environmental research about technology is more likely to prepare users for the future while also shedding light on the ways that we may want and need to act differently in the face of technological change.

An historical approach to studying artificial intelligence may also help us to: 1) re-evaluate the consumptive ideologies underpinning environmental AI discourse; 2) begin to view data as a non-renewable resource; 3) construct a new genealogy of contemporary technological culture that centers bodily subjects; and, 4) perhaps even consider acting against technological progressivism by halting the production of new “innovations” that “datafy” manual or semi-manual sectors and technologies, merely for the sake of it.

These suggestions would challenge the dominance of artificial intelligence technologies, provide different ways to imagine technological innovation and its cultural implications, and re-envision a world that may not rely on technology to solve the most pressing social, environmental, and political questions. These perspectives could also drastically change our view of the relationship between people, energy, and information. Although these considerations may seem radical and aspirational, they are necessary if we want to reorient perspectives in artificial intelligence research and think about the agents – both human and non-human – that are served and impacted by today’s dominant visions for the future of technology.

 

Octopus Bag
Figure 3: Octopus Bag 
Courtesy of https://spectrum.library.concordia.ca/id/eprint/986506/7/Indigenous_Protocol_and_AI_2020.pdf .
 

Utopian and idealistic views of artificial intelligence are justified by a host of corporate, governmental, and civil actors, who have four major reasons for supporting the continued use and development of artificial intelligence:

  1. Leveraging computational speed to make work more efficient;
  2. Appearing to improve the perceived accuracy, fairness, or consistency of decision-making (in light of so-called “human fallibility”);
  3. Similarly, appearing to depoliticize decision-making by placing it out of reach of human discretion; and,
  4. Deploying artificial intelligence technologies to solve pressing environmental issues.

These motivations, especially when devoid of historical consideration, have led to an automation bias whereby humans tend to trust computational tools more than they probably should. This raises distinct concerns about oversight and responsibility and about the ability to seek recourse in the wake of computational error. In other words, any motivation to use and deploy artificial intelligence technologies necessarily presses up against regulatory, legal, and ethical questions because, at its core, artificial intelligence can distort people’s perception of each other and the structures and systems that govern their lives. This is especially true when such technology is viewed as being inherently modern, rather than merely part of a longer, historical lineage of technological advancement.

Dentalium
Figure 4: Dentalium
Courtesy of 
https://spectrum.library.concordia.ca/id/eprint/986506/7/Indigenous_Protocol_and_AI_2020.pdf

In this sense, studying artificial intelligence with an historical orientation is as much about people, culture, and the world, as it is about the technology itself. Artificial intelligence is people-populated. It is reliant on human bodies and brains. It is dependent on human hands and eyes. It is fueled by us. But technochauvinism and techno-optimism (both inherently modernist ideologies) hinder our ability to see this. Instead, artificial intelligence perpetuates the fantasy of ever-flowing, uninterrupted, and curated streams of information, technological solutionism, and optimism about artificial intelligence’s ability to solve the world’s most pressing questions – as long as it’s designed with “humans in the loop.” This framing, though, limits and constrains human agency and autonomy by positioning humans as a mere appendage to the machine. This view relies only on small tweaks to the current automated present and fails to account for artificial intelligence imaginaries informed by the past that may better address the harms and inequities perpetuated by current artificial intelligence systems.

A strictly modernist approach to artificial intelligence and automation in general has hampered people’s ability to imagine alternatives to artificial intelligence systems, despite overwhelming evidence that the integration of those systems into our everyday lives disproportionately benefits the wealthy elite and creates undue harm to vulnerable groups (Acemoglu, 2021; D’Ignazio & Klein, 2020; Benjamin, 2018; Radin, 2017). This is because, without an historical orientation, it is natural – and easy – to view artificial intelligence as not only representative of the future, but also as actively shaping it by both opening and closing imaginative possibilities of what the world can become with the “help” of new technologies.

Instead, I’d like to draw attention to an alternative vision: what if we resist the urge to build, deploy, and use new computational systems? What if we begin to realize that technology might not be our world’s saviour? What if we choose to slow down and work intentionally and mindfully instead of quickly? These questions are not meant to elide the important computational work currently carried out by and through artificial intelligence systems, including and especially in medical applications and in services that are too dangerous for human actors to perform. Instead, this alternative vision for the future, which is deeply rooted in historicity, simply resists viewing technology as determined, and instead sees it as being determinable. It reorients power in the favour of human agents rather than technological ones.

Perhaps the “AI question” can only be solved when people are empowered to imagine futures beyond the dominance of techno-utopianism. After all, new imaginaries are dangerous chiefly to those who profit from the way things currently are. Alternative futurisms have the power to show that the status quo is fleeting, non-universal, and unnecessary, and although artificial intelligence has changed the world, people have the ultimate power to shape it.


Bibliography

Acemoglu, D. (2021). Redesigning AI: Work, democracy, and justice in the age of automation. Massachusetts: MIT Press.

Benjamin, R. (2019). Race after technology. Cambridge: Polity Press.

Bolter, J. (1984). Turing’s man: Western culture in the computer age. University of North Carolina Press.

Brevini, B. (2021). Is AI good for the planet? Cambridge: Polity Press.

Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. Massachusetts: MIT Press.

Chun, W. (2011). Programmed visions: Software and memory. Massachusetts: MIT Press.

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. New Haven: Yale University Press.

Dauvergne, P. (2021). AI in the wild: Sustainability in the age of artificial intelligence. Massachusetts: MIT Press.

D’Ignazio, C., and Klein, L.F. (2020). Data feminism. Massachusetts: MIT Press.

Edwards, P.N. (2003). Infrastructure and modernity: Force, time, and social organization in the history of sociotechnical systems. In Modernity and Technology (eds. Misa, T.J., Brey, P., and Feenberg, A.). Massachusetts: MIT Press.

Frey, C.B. (2021). The technology trap: Capital, labor, and power in the age of automation. New Jersey: Princeton University Press.

Golumbia, D. (2009). The cultural logic of computation. Massachusetts: Harvard University Press.

Gray, M., and Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Mariner Books.

Haigh, T., and Ceruzzi, P.E. (2021). A new history of modern computing. Massachusetts: MIT Press.

Lewis, J. et al. (2020). Indigenous Protocol and Artificial Intelligence: Position Paper. Indigenous Protocol and Artificial Intelligence Working Group. https://www.indigenous-ai.net/position-paper/

Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Massachusetts: Harvard University Press.

Radin, J. (2017). “Digital natives”: How media and Indigenous histories matter for big data. Osiris, 32(1).

Schwab, K. (2017). The fourth industrial revolution. New York: Penguin.

 

Helen A. Hayes (May 2022). “New Approaches to Critical AI Studies: A Case for Anti-Modernism and Alternative Futurisms.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 45-53.


About the Author: Helen A. Hayes is a Ph.D. Candidate at McGill University, studying the relationship between artificial intelligence, its computational analogs, and the Canadian resource economy. She is also a Policy Fellow at the Centre for Media, Technology, and Democracy at the Max Bell School of Public Policy. She can be reached at helen.hayes@mcgill.ca or on Twitter @helen__hayes.


 

JOSEPHINE, THE AVERAGE AMERICAN FEMALE, AND JOE JR., A TYPICAL 6-YEAR-OLD
Figure 1. This is Josephine, the average American female, and Joe Jr., a typical 6-year-old. From the Collection of Smithsonian Institution Libraries.

Cybernetics, an intellectual movement that emerged during the 1940s and 1950s, conceived of the body as an informational entity. This separation of the mind and body, and the prioritization of the mind as a unit of information, became a liberating quality as the capitalist world of industrialism, with its mechanical and earthly labor, bound the liberal subject in shackles. The cybernetic subject, in contrast, as “a material-information entity whose boundaries undergo continuous construction and reconstruction,” floated in the permeable ether of information and technology (How We Became Posthuman, 3). Marxist issues of social alienation and scarcity were resolved by the interconnectedness of information-based beings, and hierarchical labor relations were replaced with more communal forms of exchange. A new utopia was thus formed with the advent of digital communication (Brick, 348).

This dematerialized, cybernetic body converged with the creation of technology through the work of the industrial designer Henry Dreyfuss. Dreyfuss, who drafted what can be considered early user personas out of data collected from the military, utilized these imagined bodies for the testing of physical products. Dreyfuss’ designs, or what he labeled as “Joe and Josephine,” quantified the human experience. This model of testing and iterating designs based on dematerialized conceptions of the body was later incorporated into the development of technology by computer scientists such as Ben Shneiderman, who claimed in a 1979 paper that Dreyfuss’ emphasis on the human experience must be considered by engineers and designers. As scholars such as Terry Winograd and John Harwood claim, Dreyfuss’ methodology became the model for user testing that has remained relevant for interaction designers ever since its publication in 1955.

However, as Katherine Hayles argues in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999), the dematerialized body as conceived of by Dreyfuss is problematic. To put it simply, “equating humans and computers is especially easy” if the mind is both an egoless and informational resource to be shared (How We Became Posthuman, 2). Yet, this sort of epistemology neglects embodied and subjective experiences. Race, class, and gender relations cannot be erased by what she labels the “posthuman,” and while Hayles published her book over two decades ago, this issue is still pressing in the field of design. As Sasha Costanza-Chock describes in their book Design Justice: Community-Led Practices to Build the Worlds We Need, a “nonbinary, trans, femme-presenting person” is unable to walk through an airport scanner without getting stopped because the system has been built to represent “the particular sociotechnical configuration of gender normativity” (Design Justice). The system, in identifying and classifying the body as information, misses crucial identities. In a paper published in 2018 titled “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” authors Timnit Gebru and Joy Buolamwini examined a similar problem of bodily erasure (Buolamwini and Gebru, 2018). Gebru and Buolamwini found that facial recognition systems trained on biased data sets representing faces of mostly white men will, unsurprisingly, become biased. The bodies of women and people of color, in this example, are made invisible through their translation into information. As Aimi Hamraie writes in their book Building Access: Universal Design and the Politics of Disability:

Ask an architect about their work, and you may learn more about the style, form, materials, structure, and cost of a building than about the bodies or minds meant to inhabit it. Examine any doorway, window, toilet, chair, or desk in that building, however, and you will find the outline of a body meant to use it. From a doorframe’s negative space to the height of shelves and cabinets, inhabitants’ bodies are simultaneously imagined, hidden, and produced by the design of built worlds. (Hamraie, 19)

Theme Center - Democracity - Model of towns and countryside
Figure 2. "Democracity" designed by Henry Dreyfuss for the 1939
World's Fair in New York City.  
Courtesy of Manuscripts and Archives Division, The New York Public Library. (1935 - 1945).

Architects, industrial designers, and interaction designers wield power when they craft who they imagine will use their built environments, and when they ignore their own biases, designs are built to reify hegemonic systems. There is thus a larger issue of disembodiment which needs to be researched as it relates to the contemporary methodologies of interaction designers.

The relationship between designers and human bodies has a long history. As Christina Cogdell argues in Eugenic Design, the “scientific possibilities of controlling heredity through environmental manipulation inspired reform-minded architects and designers” during the early twentieth century, specifically (Cogdell, 16). Cogdell finds that early industrial designers such as Dreyfuss were swept up in a movement to “streamline” design much in the same way that eugenicists looked to “streamline” the human body (Cogdell, 52-53). Cogdell cites examples such as the 1939 New York World’s Fair, which featured Dreyfuss’ work against a backdrop that used streamlining as a medium through which to promote democracy (Cogdell, 2004). Events such as these demonstrate that the industrial desire to create “perfect” environments and “perfect” bodies was not unique to the United States. In Eugenics in the Garden: Transatlantic Architecture and the Crafting of Modernity, Fabiola López-Durán argues that Lamarckian evolution was “an invitation to social engineering” for Latin American nations at the turn of the twentieth century (López-Durán, 4). This form of evolutionary theory fostered an “orientation toward environmental-genetic interaction, empowering an apparatus that made race, gender, class, and the built environment critical measurements of modernity and progress” (López-Durán, 4). While Dreyfuss was engaged with this period of industrial design, this paper departs from these histories by situating Dreyfuss within the post-war era. Nevertheless, this paper recognizes that Dreyfuss’ connection to streamlined bodies may have informed his notion of user-testing, and this is an important consideration when reviewing images of Joe and Josephine.

In this essay, I will explore the cybernetic conception of the body as it relates to the development of technology. More specifically, I will argue that user testing practices, conceived within the historical and cultural context of cybernetics, envisioned that any human figure might represent all human figures. However, as examined previously, this perception of the body as universal ignores the subjective, material, and embodied experiences of users, contributing to the biased systems we see today. This proposed paper will begin with an exploration of the cybernetic notion of the body. It will then explore how this concept converged with the advent of user testing practices and the development of user personas, or skeuomorphic designs used for the creation of digital products. It will, lastly, attempt to correct the histories of industrial design and interaction design by reconfiguring the work of Dreyfuss. These efforts will hopefully extend contemporary literature such as the work of Costanza-Chock, Gebru, Buolamwini, and Hamraie, through a re-examination of history.

1950s Cybernetics

Cybernetics emerged as a dominant field in the 1950s through the work of Norbert Wiener and the publication of The Human Use of Human Beings (1950). In The Human Use of Human Beings, Wiener describes a type of communicative society in which humans act as individual, informational units, or automata. These informational, monadic systems relay messages to one another, and through the process of feedback, establish homeostasis. There is thus both a teleological and biological aspect to early descriptions of cybernetics. Like a beehive which has been disrupted, or a flock of geese attempting to take flight, all units must find their place through the interaction and exchange of information with others. This artful dance prioritizes utilitarianism and positivism. The gathering of information through interaction is essential, and in this way, each monad learns to operate as a collective, resisting natural entropic dissolution. The body is thus an extension of and harbor for information. As Wiener writes:

...the individuality of the body is that of a flame rather than that of a stone, of a form rather than of a bit of substance. This form can be transmitted or modified and duplicated...When one cell divides into two, or when one of the genes which carries our corporeal and mental birthright is split in order to make ready for a reduction division of a germ cell, we have a separation in matter which is conditioned by the power of a pattern of living tissue to duplicate itself. Since this is so, there is no absolute distinction between the types of transmission which we can use for sending a telegram from country to country and the types of transmission which at least are theoretically possible for transmitting a living organism such as a human being. (Wiener, 102-103)

The mechanisms of the body, and their ability to maintain life and homeostasis, provide inspiration for the natural, organic order of cybernetics, but nothing more. Information, messages, and communication are key, while the embodied experience, insofar as it is not used to relay messages, is inconsequential.

As Katherine Hayles argues in her article “Boundary Disputes: Homeostasis, Reflexivity, and the Foundations of Cybernetics,” this divorce of the body from information was essential in the first wave of cybernetics. Hayles outlines three waves of cybernetics, the first two of which concern our argument here. The first wave, from 1945 to 1960, “marks the foundational stage during which cybernetics was forged as an interdisciplinary framework that would allow humans, animals, and machines to be constituted through the common denominators of feedback loops, signal transmission, and goal-seeking behavior” (“Boundary Disputes,” 441-467). This stage was established at the Macy conferences between 1946 and 1953, and it was at the Macy conferences, Hayles argues, where humans and machines were “understood as information-processing systems” (“Boundary Disputes,” 442). It is also within this first wave that homeostasis was perceived as the goal of informational units. Following the chaos and disillusionment of World War II, first-wave cyberneticians found stability to be paramount. The Macy conferences were thus focused on this homeostatic vision.

However, psychoanalytical insight at the conference helped sow ideas for second-wave cybernetics. If man is to be viewed as a psychological subject, in translating the output of one machine into commands for another, he introduces noise into the teleological goal of homeostasis. This reflexive process, or one in which the autonomy of both subjects is to be considered, disrupted the first-wave one-directional model. As Hayles describes, Lawrence Kubie, a psychoanalyst from the Yale University Psychiatric Clinic “enraged other participants [at the conference] by interpreting their comments as evidence of their psychological states rather than as matters for scientific debate” (“Boundary Disputes,” 459). Nevertheless, while the issue of reflexivity may not have won at the Macy conferences, it later triumphed over homeostasis through the work of biologist Humberto Maturana. Maturana rescued the notion of reflexivity by emphasizing that through the rational observer, the black box of the human mind might be quantified. This new feedback process introduced an autopoietic version of reflexivity in which both man and machine might improve through interaction, resolving the threat of subjectivity. Through both waves of cybernetics, cyberneticians instantiated the concept of the body as immaterial.

Designing for People, Joe, and Josephine

The cybernetic body converged with the development of technology in the 1950s through the work of the industrial designer Henry Dreyfuss. Dreyfuss, considered one of the most influential designers of his time, developed a model for user testing through skeuomorphic designs that quantified the human experience. While Dreyfuss was not the first to conceive of user testing, he was the first to develop popular user personas. As Jeffrey Yost notes in Making IT Work: A History of the Computer Services Industry, the RAND Corporation’s Systems Research Laboratory conducted a simulation study labeled Project Casey that used twenty-eight students to test man-machine operations for early warning air defense (Yost, 2017). The practice of interviewing early adopters of a system continued into the 1960s in time-sharing projects such as Project MAC, in which psychologists such as Ulric Neisser interviewed users about their phenomenological experience with a computer system. It was Dreyfuss, however, who developed pseudo-users that might be used on a wide scale. While command-and-control computing and human factors research demanded testing with specialized users, Dreyfuss aimed, as an industrial designer, to create products for the masses. He therefore looked to craft images of what he deemed lay people for the creation of physical products.

First introduced in his book Designing for People (1955), Joe and Josephine represent Dreyfuss’ perception of the “average” man and woman. They have preferences and desires, they are employed, and most importantly, they are forms of a Platonic ideal that can be used for testing products. Like cyberneticians such as Maturana, Dreyfuss seems to have recognized the reflexivity between man and machine. Using Joe and Josephine, Dreyfuss tested the interaction between a product and its imagined user in order to improve its usability. Dreyfuss’ book was met with much praise, attesting to the importance of his new model. A review in The New York Times from 1955 titled “The Skin-Men and the Bone-Men” credits Dreyfuss with being a “skin man” who hides the complexity of a mechanism behind its skin (Blake, 1955). In a review in The Nation from the same year, the author Felix Augenfeld also credits Dreyfuss for “his fantastic organization and an analysis of his approach to the many-sided problems the designer must face” (Augenfeld, 1955). Joe and Josephine were thus considered innovative figures upon their publication.

As machine-like entities, Joe and Josephine reflect the discussions of the Macy conferences, and as models for user testing, they resemble second-wave reflexivity. However, it is unclear what interactions Dreyfuss had with cybernetics during the 1950s. In an article titled “A Natural History of a Disembodied Eye: The Structure of Gyorgy Kepes’s ‘Language of Vision,’” the author Michael Golec describes letters between the cybernetician Gyorgy Kepes and Dreyfuss from the early 1970s (Golec, 2002). Dreyfuss also illustrated a chapter of Kepes’ book Sign, Image, Symbol (1966), indicating another touch point between the designer and the cybernetician (Blakinger, 2019). The cybernetician Buckminster Fuller wrote the introduction to a publication by Dreyfuss titled Symbol Sourcebook: An Authoritative Guide to International Graphic Symbols (1972), providing a final touch point between Dreyfuss and cybernetics. Nevertheless, there is no direct evidence that Dreyfuss knew of the Macy conferences, and this question needs more research.

Despite the question of Dreyfuss’ interaction with cybernetics, his new model was adopted into cybernetic software and hardware development processes by the 1970s. In a paper titled “Human Factors Experiments in Designing Interactive Systems” (1979), the computer scientist Ben Shneiderman cites Dreyfuss as someone who provides “useful guidance” for the development of computer systems (Shneiderman, 9). Shneiderman also credits Dreyfuss with a user-centered approach that prioritizes the friendliness and compatibility of computer systems with their human users. He advocates for “personalizing” the computer by using human testers, and while he does not directly mention Joe and Josephine, he does state that designers should know their users (Shneiderman, 11). Shneiderman additionally cites various cybernetic articles, merging Dreyfuss with cybernetics once again. This process of crafting personas to test prototypes, outlined by Shneiderman, has continued into the present day.

The work of scholars such as John Harwood and Terry Winograd demonstrates the permanence of Joe and Josephine in the history of technology. In The Interface: IBM and the Transformation of Corporate Design, 1945–1976, Harwood describes The Measure of Man, a 1959 publication by Dreyfuss that expounded on Joe and Josephine. Harwood finds that The Measure of Man became the primary source for graphic and ergonomic standards within the United States, England, and Canada, writing that it is “the first and most important, comprehensive collection of human engineering or ergonomic data produced explicitly for architects and industrial designers” (Harwood, 94). Winograd echoes Harwood’s claims in an article titled “Discovering America: Reflections on Henry Dreyfuss and Designing for People.” Winograd notes that Dreyfuss has been a key figure in the creation of courses for Stanford’s d.school, as he is understood as having created the model for empathizing with the user via Joe and Josephine (Winograd, 2008). Both Winograd and Harwood echo a common perception that Dreyfuss initiated a Kuhnian paradigm shift in the field of design. Through Joe and Josephine, Dreyfuss assisted designers in moving away from the linear development model of Fordism and toward one of circular, iterative feedback. Yet it is precisely this heroic view of Dreyfuss that I wish to contest, for although Dreyfuss’ work is significant, Joe and Josephine introduced the use of biased data into product development. Indeed, Winograd acknowledges this flaw when he notes that with Joe and Josephine we must also “keep visible reminders of the subtler and less easily depictable social and cultural differences that determine the compatibility of people with products and interfaces…” (Winograd, 2008). However, I argue there is a deeper issue here, one emboldened by cybernetic theory and hidden in the construction of Joe and Josephine. While Joe and Josephine represent the “average” man and woman according to Dreyfuss, they also reflect his bias as a designer and his inability to recognize the quantified body as subjective.

Figure 3. Tilley, Alvin and Henry Dreyfuss and Associates. (1993) Drawing 36. The Measure of Man and Woman.

The Designer as World Builder

In tracing the transition from homeostasis to reflexivity, Hayles makes note of a complication that elucidates this issue. In analyzing the work of Humberto Maturana and Francisco Varela, two second-order cyberneticians, she finds that Maturana and Varela were system builders who created systems by drawing boundaries to decide what was to be included inside and what was left out (How We Became Posthuman, 188). As Hayles writes, “Consistent with their base in the biological sciences, Maturana and Varela tend to assume rational observers…Granting constructive power to the observer may be epistemologically radical, but it is not necessarily politically or psychologically radical, for the rational observer can be assumed to exercise restraint” (How We Became Posthuman, 188). The solution to reflexivity conceived in second-order cybernetics is therefore flawed. If the rational observer can quantify the human subject, who is it that edits the observer? An image by the computer scientist Jonathan Grudin visualizes this idea. In “The Computer Reaches Out: The Historical Continuity of Interface Design,” Grudin sketches the feedback process between the user and the computer (Grudin, 1989). In the image, a computer reaches out to a user, and the user reaches back. The user is also connected to a wider network of users, who reach back to the user, and therefore to the computer as well. In this system, there is an endless chain of interaction among users and observers, calling into question who is observing whom. As such, no one user can claim to be a world builder, as they are enmeshed in a socio-material environment.

Dreyfuss, however, claims this title. Joe and Josephine not only represent universal versions of man and woman, like Adam and Eve, but they are also the “hero” and “heroine” of Designing for People. Yet, as Russell Flinchum writes in the book Henry Dreyfuss, Industrial Designer: The Man in the Brown Suit, a “hodgepodge” of information was interpreted by Dreyfuss’ designer Alvin Tilley to construct Joe and Josephine (Flinchum, 87). Additionally, while the exact reports Dreyfuss drew from are unclear, we can surmise their likely sources. In an oral history with Niels Diffrient, one of Dreyfuss’ designers who later iterated on Joe and Josephine, Diffrient states:

 ...Henry himself had the brilliance, after the Second World War, in which he had done some wartime work of carrying on what he'd learned about human factors engineering...You see, a lot of the war equipment had gotten so complex that people didn't fit into things and couldn't operate things well, like fighter planes, all the controls and everything...So a specialty grew up — it had been there, but hadn't gone very far — called human factors engineering...we found out about these people who were accumulating data on the sizes of people and began to get a storehouse, a file, on data pulled together from Army records, the biggest of which, by the way, and the start of a lot of human factors data, was the information they had for doing uniforms because they had people of all sizes and shapes. (Oral History with Niels Diffrient, Archives of American Art, 2010).

In a later letter to Tilley, he is asked about the specific type of Army data, helping to track which files Dreyfuss may have obtained. The inquirer states that “‘...Douglas Aircraft called to ask if it [The Measure of Man] was available...He asked if the source or sources from which all this data was gathered has been noted’” (Archives of American Art, 2010). Dreyfuss, who had worked on projects for the Vultee Aircraft company during the war, is therefore likely to have used Air Force data as a major source for Joe and Josephine (Flinchum, 1997). A report on military anthropometric data collection practices supports this claim. The report, titled “Sampling and Data Gathering Strategies for Future USAF Anthropometry,” mentions that the work of Francis Randall at Wright Field was an excellent example of proper data collection practices during WWII (Churchill and McConville, 1976). Randall’s report, “Human Body Size in Military Aircraft and Personal Equipment,” contains countless drawings of fighter pilot dimensions (Randall, 1946). In the book The Measure of Man and Woman, which improved on the designs of Joe and Josephine, Dreyfuss’ team appears to have been inspired by the depictions of fighter pilots in Randall’s work. A comparison of an image of Joe in a compartment with images of fighter pilots demonstrates how closely aligned Dreyfuss was with military practices.

However, Randall’s report also indicates the long-standing practice of classifying and quantifying bodies based on normative standards prevalent within a specific cultural moment. The manipulation of bodies for military data collection, and the exclusion of bodies that did not fit a certain “norm” from these data sets, has a long history that cannot be revisited here, but it indicates that the inspiration for Joe and Josephine was based on biased data. Consequently, the shapes of the Joe and Josephine personas, which heavily influenced both industrial design and computer design practices, represent biased images. There must be continued investigation into which reports Dreyfuss gathered, but it appears likely that he used skewed data to construct his influential designs.

Figure 4. Randall, Francis et al. (1946) Human Body Size in Military Aircraft and Personal Equipment. Dayton, OH: War Department, Army Air Forces, Air Material Command.

Dreyfuss Today

It is difficult to measure the outcome of such flawed practices, but the work of Dreyfuss has resonated throughout the century. The ripple effect of Joe and Josephine, and the countless products drafted from these designs, brings forth a new variable to consider in the construction of digital products. This paper is therefore a response to the many accounts that have canonized Dreyfuss within the history of industrial design, and consequently, the history of interaction design. As demonstrated through the reference to Winograd, Dreyfuss’ efforts are still taught in the classroom. However, through the conception of both real and imagined spaces, designers envision an ideal user, and this user can either represent the multiplicity of complex, messy, and beautiful bodies, or it can represent a “universal” ideal that never truly existed. Tracing the genealogy of these imagined users to their origins is essential for improving the testing practices of our modern moment.


Bibliography

Augenfeld, F. (1955, August 6). Masterpieces for Macy's. The Nation.

Blake, P. (1955, May 15). The Skin Men and the Bone Men. The New York Times.

Blakinger, J. R. (2019). Gyorgy Kepes: Undreaming the Bauhaus. Cambridge, MA: The MIT Press.

Brick, H. (1992). Optimism of the mind: Imagining postindustrial society in the 1960s and 1970s. American Quarterly, 44(3), 348. doi:10.2307/2712981

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Conference on Fairness, Accountability, and Transparency, Proceedings of Machine Learning Research.

Churchill, E., & McConville, J. T. (1976). Sampling and data gathering strategies for future USAF anthropometry. Wright-Patterson Air Force Base, OH: Aerospace Medical Research Laboratory.

Cogdell, C. (2010). Eugenic design: Streamlining america in the 1930s. Philadelphia, PA: University of Pennsylvania Press.

Costanza-Chock, S. (2020). Design Justice. Cambridge, MA: The MIT Press.

Dreyfuss, H. (1976). The Measure of Man. New York, NY: Watson-Guptill.

Dreyfuss, H. (2012). Designing for People. New York, NY: Allworth Press.

Dreyfuss, H. (2014). Posters, The Measure of Man (Male and Female) [Cooper Hewitt Design Museum]. Retrieved 2022, from https://collection.cooperhewitt.org/objects/51497617

Erickson, T., Winograd, T., & McDonald, D. (2008). Reflections on Henry Dreyfuss, Designing for People. In HCI Remixed: Essays on Works That Have Influenced the HCI Community. Cambridge, MA: MIT Press.

Flinchum, R. (1997). Henry Dreyfuss, Industrial designer: The man in the brown suit. New York: Rizzoli.

Golec, M. (2002). A Natural History of a Disembodied Eye: The Structure of Gyorgy Kepes's Language of Vision. Design Issues, 18(2), 3-16. doi:10.1162/074793602317355747

Grudin, J. (1989). The Computer Reaches Out: The Historical Continuity of Interface Design. DAIMI Report Series, 18(299). doi:10.7146/dpb.v18i299.6693

Hamraie, A. (2017). Building access: Universal design and the Politics of Disability. Minneapolis, MN: University of Minnesota Press.

Harwood, J. (2016). Interface: IBM and the Transformation of Corporate Design, 1945-1976. Univ Of Minnesota Press.

Hayles, N. K. (1994). Boundary disputes: Homeostasis, reflexivity, and the foundations of Cybernetics. Configurations, 2(3), 441-467. doi:10.1353/con.1994.0038

Hayles, N. K. (2010). How we became posthuman: Virtual bodies in cybernetics, literature, and Informatics. University of Chicago Press.

López-Durán, F. (2019). Eugenics in the garden: Transatlantic architecture and the crafting of modernity. Austin, Texas: University of Texas Press.

Oral history interview with Niels Diffrient. (2010). Retrieved March 7, 2022, from https://www.aaa.si.edu/collections/interviews/oral-history-interview-niels-diffrient-15875

Randall, F. E. (1946). Human Body Size in Military Aircraft and Personal Equipment. Dayton, OH: Army Air Forces Air Material Command.

Shneiderman, B. (1979). Human Factors Experiments in Designing Interactive Systems. Computer, 12(12), 9-19. doi:10.1109/mc.1979.1658571

Tilley, A., & Henry Dreyfuss and Associates. (1993). Drawing 36. The Measure of Man and Woman.

Vultee Aircraft, Inc., military aircraft. (n.d.). Retrieved March 7, 2022, from https://www.loc.gov/item/2003690505/.

Wiener, N. (1967). The Human Use of Human Beings: Cybernetics and Society. New York, NY: Avon Books.

Yost, J. R. (2017). Making IT Work: A History of the Computer Services Industry. Cambridge, MA: MIT Press.

 

Caitlin Cary Burke (March 2022). “Henry Dreyfuss, User Personas, and the Cybernetic Body.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 32-44.


About the author: Caitlin Burke is a Communication PhD student at Stanford University, where she studies user experience design, design ethics, media history, and human-computer interaction.


 

 

Figure 1: Queen Elizabeth II touring Burroughs Strathleven factory 1953.  Courtesy of Charles Babbage Institute Archives.

The first finding is that long before computers, the Internet, or social media became available, people on both sides of the Atlantic were heavily dependent on organized (usually published) information on a regular basis. Studies about the history of the week, children’s education, work-related activities, and devotion to religious and community practices have made that abundantly clear. The challenge for historians now, therefore, is to determine how best to catalog, categorize, and study this sprawling subject of information studies in some integrative, rational fashion. Do we continue to merely study the history of specific pieces of software or machinery, of ephemera such as newspapers and books, or of providers of information such as publishers and Google?

A Framework for Studying Information’s History

In my three-volume Digital Hand (2004-2008) and subsequently in All the Facts: A History of Information in the United States Since 1870 (2016), I shifted partially away from exploring the role of providers of information and its ephemera toward how people used facts—information and data. In the research process, categories of everyday information began to emerge, and so did periods—think epochs, eras. As with most historical eras, these overlapped, signaling a changing world.

The same held true for the history of the types and uses of information and, of course, the technologies underpinning them. We still use landlines and smartphones; we still fill out paper forms asking for information requested for a century, as well as online ones; and of course we use First Class mail and e-mail. Publishers Weekly routinely reports that only 20 percent of readers consume some e-books; 80 percent of all readers still rely on paper books, so old norms still apply. Apple may post a user manual on its website, but buy an HP printer and you are likely to find a paper manual in its shipping box.

All the Facts reported that some types of information ephemera existed from the 1800s to the present, supplemented by additional ones that did not replace the earlier formats. Obvious new additions were electrified information, such as the telegraph, telephone, radio, and TV. Paper-based information was better produced by people using typewriters and better-quality pens and pencils, and it was stored in wooden and later metal file cabinets and on 3 x 5 cards, and still later in computers, PCs, smartphones, and now digital doorbell databases. Each improvement also made it more flexible to store information in logical ways, such as on 3 x 5 cards or in folders.

Figure 2: A side table in American living room, ca. 1930s.

The volume of their use grew massively; humble photographs of the interiors of homes and offices taken over the past 150 years illustrate that behavior, as does the evolution of the camera, which is itself an information-handling device. Commonly used ephemera across the entire period include newspapers, magazines, books, telegraphy, telephones, radios, television, personal computers, smartphones, and other digital devices, all arriving in that order. So any chronology or framework should take their use into account. If you are reading this essay in the 2020s, you are familiar with the myriad ways you have relied on information and appropriated these devices, with the probable exception of the telegraph, which passed into history by the early 1970s.

A second category of activities that any framework needs to incorporate, because they remained constant topics of concern across the entire two centuries, concerns the information people needed to lead their personal lives, such as medical information to cure illnesses, political information to inform their opinions and voting practices, and so forth. Historians better understand that work-related activities required massively increased uses of information to standardize work processes, run ever-larger organizations, and provide new products and services. I, and others, continue to study those realms of information use, because they kept evolving and expanding across the past two centuries—a process that shows no signs of slowing. The historical evidence points, however, to several categories of information evident in use in this period in private life. These include consulting published—I call it organized—information on taking care of one’s home and raising children, sports and hobbies, vacations, and interacting with one’s church, community and non-profit organizations, and with government agencies at all levels. Participation in civic and religious institutions, in particular, represented a “growth industry” for information across the two centuries. Sales volumes for books and magazines provide ample evidence of this, just as sales statistics for PCs and smartphones do today. People also relied on information available in public spaces. These included public libraries; billboard advertisements; government signs and messages along roads and highways, both painted and digitized; advertisements on the sides of buildings; and a massive increase in the use of maps available from publishers, from state highway departments, and as metal signs on roads. Users worried about privacy issues, a concern expressed in North America as early as the 1600s and still with us today.

Role of the Internet

But what about the Internet? By now the reader will have concluded that everything mentioned already had rapidly migrated to the Internet too, certainly by the very early 2000s. We have already created frameworks for phases in the development and use of the Internet, such that we accept 1994-1996 as phase one of wide use (adoption or appropriation), 1997-1998 as a second phase with the ability to conduct interactive information exchanges, a third with the introduction of feedback loops beginning in about 2002-2003, and yet another involving the adoption of social media applications soon after. Each had its applications: Phase 1 brought product brochures, mailing addresses, telephone numbers, and some e-mail; Phase 2, intranets, databases, order taking, and organizational news; Phase 3, the seeking of feedback, customer engagement, and business partner collaboration; and Phase 4, the posting of personal information (remember the photographs of cats on Facebook?), communities of practice and customers sharing information, including churches, civic organizations and clubs, and the rise of information analytics. Historical data documented the rapid diffusion of these practices, such that over half the world today uses the Internet to share information (more on that later). Usage became a new central facet of people’s daily lives.

Because we are discussing the Internet’s use, note that the most widely sought-after categories of Internet-sourced information in its early stages, which continue to the present, were political information and, even more so, pornography and health. Increasingly, too, people seek out games and, always, “how to” advice. Libraries became spaces one could go to for access to the Internet. Starting in 2007, people across the world were able to access information more quickly and more often than before due to the introduction of the smartphone.

In All the Facts we published a photograph of a public library in San Antonio, Texas, from 2013 that had no books; rather, it looked like an Apple Store with rows of screens. Today, such spaces are common in most public, school, and university libraries in scores of countries. Increasingly since the early 2000s, people have received growing amounts of news from sites on the Internet, and today news aggregators pull that news together according to a user’s preferences for topics and timelines. Religion and raising children are widely covered by information sources on the Internet. In fact, by about 2015 the public expected that any organization had to have a presence on the Internet: civic organizations, every government agency one can imagine, schools, universities, industry and academic associations, stores (including brick-and-mortar versions), political parties, clubs, neighborhood associations, and even children’s playgroups. I found few exceptions to this statement when writing All the Facts.

Historians began to catalog the types of information that became available from these organizations, beginning in the 1950s. Following the lead of librarians, who in the 1800s had started the cataloging process we are familiar with today, these types included forms of ephemera (e.g., books, magazines, journals) and topics (e.g., physics, history, economics). Historians are now beginning to go further, such as William Aspray and I with our current research about the types of fake information and their impact on truth and authenticity, others exploring what information people seek and rely upon through the Internet, and still others examining how people use information on social media.

As to categories of information: by 2014, for example, the Ford Motor Company was providing online information about the company, news, its products, the role of innovation, people and careers, media postings, space for customer feedback, contact addresses, stock data and investor facts, a social media sub-site, facts about customer support, automotive industry facts, and declarations about privacy policies. Meticulously documenting these categories of information for thousands of such organizations demonstrates the diversity—and similarity—of the types of information that one came to expect. Note, however, that the information and functions cataloged about Ford had been available in paper-based forms since the 1920s, just not as easily or quickly accessible.

Figure 3: Nurse using punch cards (date unknown).

Returning to the pre-Internet era, think in terms of eras (phases) by going beyond when some form of ephemera became available. The ephemera or technologies themselves added diversity, convenience, speed, less expensive communications, and the capability of moving ever-increasing volumes of information. Historians have done considerable research on these five features. However, information historians are just beginning to realize that by focusing their concerns on the information itself, pivoting away from the technologies themselves (e.g., books and computers), they see the endurance of some topics—access to medical information, facts about raising children, or cooking recipes—regardless of format or technology used.

Thinking this way expands our appreciation for the extent of a society’s use of information and, just as relevant, how individuals used it too. In a series of books produced by Aspray, one can see how data-intensive the lives of people of all ages, socio-economic statuses, and interests became over time. I have argued in All the Facts and elsewhere that this kind of behavior, that is to say, an ever-increasing reliance on organized information, had been on the rise since the early 1800s.

Recent Findings and Thinking

While All the Facts lays out the case for how we could come to the conclusion that we lived in yet a second information age—not THE Information Age of the post-World War II period—that book was published in 2016, and much has happened since then. Rapid changes in the realities facing historians of information keep pounding the shores of their intellectual endeavors on three beaches: Internet usage, fake news and misinformation, and the changing forms of information.

In 2021 the Pew Research Center reported that 95 percent of American adults living in urban centers used the Internet, as did 94 percent of suburban and 90 percent of rural residents. In comparison, in 2015, when the writing of All the Facts wrapped up, urbanites were at 89 percent, suburbanites at 90 percent, and rural residents at 81 percent. Since 2000, users have doubled as a percent of the total population. The overall number of Americans using the Internet in 2021 had reached 93 percent of the population. Smartphone usage also increased, becoming one of the top drivers of Internet usage, thanks to both the increased availability and affordability of this technology. Similar overall statistics could be cited for other OECD, Asian, and South American societies. Convenience and affordability combined are driving use all over the world, no longer just in the wealthiest societies.

Other surveys conducted in the United States by the Pew Research Center reported that over 70 percent of residents thought the information they obtained was accurate and trustworthy in 2012, just before the furor over misinformation became a major concern in American society, expressed by both the politically energized Right and Left, by students of misinformation, and by many in the media and in senior government positions. But the types of information people sought were the same as in prior decades.

The problems survey respondents expressed emanated from where fake news or misinformation resided. First, fake news and misinformation were not confined to sources on the Internet; they appeared in books, television programs, magazines, and radio programs, often promulgated by agents operating across multiple digital and paper-based platforms. Information scholars are increasingly turning their attention to this problem, as have Aspray and I, reporting our results in a series of books and papers. However, as he and I have emphasized and documented, this has been a concern and an overt activity since the eighteenth century.

In Fake News Nation (2019) we focused largely on political and industry-centered examples, while in a sequel, From Urban Legends to Political Fact-Checking: Online Scrutiny in America (2019), we began documenting the nation’s response to this growing problem. The physical world’s battles over politics and such issues as the role of tobacco, oil, and environmental damage had moved to the Internet, but they also represented terrain fought over long before the use of the web. If anything, the role of misinformation has spilled over into myriad issues important to today’s citizens: health, vaccines, historical truths, racism, and product endorsements and descriptions, among others. Trusted sources for impartial news and information competed for attention with mischievous purveyors of misinformation and with people at large opining on all manner of subjects. These activities disseminating false or misleading information represent a new development of the past decade because of their sheer volume, even though their patterns are becoming increasingly familiar to historians studying earlier decades, even centuries.

But perhaps the most interesting new research interest for historians is how information itself changes. To make All the Facts successful, it was enough, and highly revelatory, to document carefully the existence, extent, and use of information across essentially all classes, ethnic and racial groups, and ages, and to present a framework for gaining control over what otherwise were massive collections of organized information. That exercise made it possible to argue that any short list of research priorities for modern society (i.e., since at least the start of the Second Industrial Revolution) had to include the role of information in all manner of activity. During that project it became evident, however, first, that information itself (or what constituted information) was changing, not simply increasing or becoming more diverse and voluminous; and second, that this transformation of information and new bodies of fact were leading to the emergence of new professions and disciplines, along with their social infrastructures, such as professorships, associations, and literature.

Figure 4: IBM's Type 070 vertical sorters, ca. 1910s.
Courtesy of IBM archives.

For example, regarding changing information: it became increasingly electrified, beginning with the telegraph in the 1840s and extending to the "signals" that computer scientists and even biologists explore today. Some biologists and other scientists argue that information is a ubiquitous component of the universe, just as we have accepted that same idea regarding the presence of energy. Intelligence can no longer be limited to the anthropomorphic definitions that humans had embraced, emblematically called artificial intelligence. Trees communicate with one another, as do squirrels and birds, about matters relevant to their daily lives.

Regarding the second point, the development of new professions: before the 1870s there was insufficient knowledge about electricity to create the profession of electrician, but by the 1880s it existed, developed its own body of information and professional practices, and quickly became a licensed trade. In the years that followed, medical disciplines, plumbing, accounting, business management, the sciences in all manner of fields, and later even airplane pilots, radio engineers, and astronauts became part of modern society. They all developed their associations, published specialized magazines and journals, held annual conventions and other profession-centered meetings, and so forth. Probably every reader of this essay is a product of that kind of transformation.

Prior to the mid-nineteenth century, most professions had been relatively stable for millennia, as had the share of populations engaged in subsistence agriculture, law, religion, warfare, and the tiny cohort of artisans. That reality has been thoroughly documented by economic historians such as Angus Maddison in his voluminous statistical collections (2005, 2007), who pointed out that national income levels, economic productivity, and population did not change radically until more, and different, information began arriving. This was not a coincidence.

Understanding how information transformed, and its effects on society, is a far more important subject to investigate than what went into All the Facts because, like the investigations underway about misinformation, it reaches into the very heart of how today's societies are shaped and function. The earlier book was needed to establish that there was a great deal more for historians to study and communicate than the history of books or newspapers, or the insufficient number of studies of academic and discipline-centered institutions.

Figure 5: Man looking at punched paper tape ca. 1960.
 Courtesy of Charles Babbage Institute Archives.

Now we will need to explore more carefully how information changed. I propose that this be done initially by exploring the history of specific academic disciplines and the evolution of their knowledge bases. That means understanding the role of, for instance, economics, physics, chemistry, biology, history, engineering, computer science, and librarianship, and then comparing these disciplines with one another. This is a tall order, but essential if one is to understand patterns of emerging collections of information and how they were deployed, before we can realistically jump to conclusions about their impact. Too often "thought leaders" and "influencers" do just that, in the process selling many books and articles, but without the empirical understanding the topic warrants.

That is one opinion about next steps. Another is that the democratization of information creation and dissemination is more important. The argument holds that professionals and academics are no longer the main generators of information; millions of ordinary people are instead. There are two problems with this logic, however. First, such an observation is about today's activities, while historians want to focus on earlier ones, such as information generation prior to the availability of social media. Second, there is a huge debate underway about whether all of today's "information generators" are producing information, producing misinformation, or simply opining. As a historian and an avid follower of social media experts, I would argue that the issue has not been authoritatively settled, and so the actions of the experts still endure, facilitated by the fact that they control governments, businesses, and civic organizations.

I am close to completing the first of two books dealing precisely with the question of how information transformed. It took me more than 40 years of studying the history of information to realize that understanding how it changed was perhaps the most important aspect of information's history to grasp. That realization had been obscured by our imprecise understanding of what information existed. We historians approached the topic in too fragmented a way; I am guilty as charged, too. But that is not to say that the history of information technology, my home sub-discipline of history and work, should be diminished; rather, IT's role is all the more important to understand because it is situated in a far larger ecosystem, one that even transcends the activities of human beings.


Bibliography

Aspray, William (2022). Information Issues for Older Americans. Rowman & Littlefield.

Aspray, William and James W. Cortada (2019). From Urban Legends to Political Factchecking. Rowman & Littlefield.

Aspray William and Barbara M. Hayes (2011). Everyday Information. MIT Press.

Bakardjeva, Maria (2005). Internet Society: The Internet in Everyday Life. Sage.

Blair, Ann, Paul Duguid, Anja Silvia-Goeing, and Anthony Grafton, eds. (2021). Information: A Historical Companion. Princeton.

Chandler, Alfred D., Jr. and James W. Cortada, eds. (2002). A Nation Transformed by Information. Oxford.

Cortada, James W. (2016). All the Facts: A History of Information in the United States Since 1870. Oxford.

Cortada, James W. (2021). Building Blocks of Society. Rowman & Littlefield.

Cortada, James W. (2004-2008). The Digital Hand. Oxford.

Cortada, James W. (2020). Living with Computers. Springer.

Cortada, James W. (2002). Making the Information Society. Financial Times & Prentice Hall.

Cortada, James W. and William Aspray (2019). Fake News Nation. Rowman & Littlefield.

Gorichanaz, Tim (2020). Information Experience in Theory and Design. Emerald Publishing.

Haythornwaite, Caroline and Barry Wellman, eds. (2002).  The Internet in Everyday Life. Wiley-Blackwell.

Maddison, Angus (2007). Contours of the World Economy, 1-2030 AD. Oxford.

Maddison, Angus (2005). Growth and Interaction in the World Economy: The Roots of Modernity.

Ocepek, Melissa G. and William Aspray, eds. (2021). Deciding Where to Live. Rowman & Littlefield.

Zuboff, Shoshanna (2019). The Age of Surveillance Capitalism. Public Affairs.

 

James W. Cortada (February 2022). “What We Are Learning About Popular Uses of Information, The American Experience.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 19-31.


About the author: James W. Cortada is a Senior Research Fellow at the Charles Babbage Institute, University of Minnesota—Twin Cities. He conducts research on the history of information and computing in business. He is the author of IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). He is currently conducting research on the role of information ecosystems and infrastructures.

  


Editors’ note: This is a republication of an essay (the second one) from Blockchain and Society, a newly launched blog of essays by CBI Director Jeffrey Yost. As a one-time crossover at the launch of the blog and site, Interfaces is republishing an essay Yost wrote on gender inequity and disparity in participation in the development and use of cryptocurrency. This one-time republication introduces Interfaces readers to the blog, whose topic is an especially good fit with Interfaces’ mission. Please consider also subscribing to the blog: https://blockchainandsociety.com/

 

Few Women on the Block: Legacy Codes and Gendered Coins

Jeffrey R. Yost

Abstract: Computing and software overall already show major gender disparities in participation, but the decentralized cryptocurrency industry and space is far more skewed, at roughly 90 percent men and 10 percent women (computer science overall is around 20 percent women). This article explores the history of gender in computing, gender in access control systems, gender in intrusion detection systems, and the gender culture of the cypherpunks to historically contextualize, and seek to better understand, contemporary gender disparity and inequities in cryptocurrency.

PDF version available for download.

 

Given that decentralization is at the core of the design and rhetoric of cryptocurrency projects, the field often highlights, or hints at, small to mid-sized flat organizations and a dedication to inclusion. Crypto coin and platform projects' report cards on diversity, however, are uneven. While there is some BIPOC diversity in cryptocurrency overall, it is quite unequal: the founding and leadership of Bitcoin (its team; the creator is anonymous) and of the top 30 altcoins (alternatives to Bitcoin) are disproportionately white North Americans, Europeans, and Australians, along with East Asians. With gender, inequalities are especially pronounced, in both participation and resources. The half dozen or so surveys I found, spanning the past few years, suggest (in composite) that women's participation in the crypto workforce is slightly less than 10 percent. There are few women on the block, far fewer percentagewise in cryptocurrency than the already quite skewed low ratios for women in computing and software. On the adoption side, twice as many men own cryptocurrency as women.

This essay, on women in cryptocurrency, concentrates on gender inequities, as well as intersectionality. It discusses early research in this area, standout women leaders, and organizational efforts to address gender imbalances and biases. It places this discussion in larger historical contexts, including women in computing, women in security, women in cryptography, and women in, or funded by, venture capital. It also highlights the rare instances of women CEOs in cryptocurrency. Achieving greater gender balance is a critically important ethical issue. It is also good business: many studies show that corporations with gender balance on their boards and women in top executive positions outperform. My essay posits that historical, terminological, spatial, and cultural partitions and biases block gender inclusion and amplify inequality in cryptocurrency development, maintenance, and use.

Major Gender Disparities in Cryptocurrency

A major study by the international news organization Quartz surveyed the 378 cryptocurrency projects that received venture capital funding between 2012 and 2018 (Hao, 2018). Many cryptocurrency projects do not have this luxury or take this path; they raise funds from friends and family, bootstrap, or rely on other means at the start. Venture capital funded projects tend to have greater resources and key connections to grow, and most of the largest coin projects have taken on venture capital support at some point in their young histories. The advantage is self-reinforcing: rich projects tend to grow richer through R&D, marketing, and the momentum of network effects described by Metcalfe's Law (the value of a network is proportional to the square of its number of users), while under-resourced coin projects often cease within several years as their capitalizations descend to near $0.
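The network-effects point can be made concrete with a minimal sketch of Metcalfe's Law. The function name and the proportionality constant k are illustrative assumptions, not from the essay; the law itself only asserts that value scales with the square of the user count.

```python
# Metcalfe's Law: the value of a network grows with the square of its
# number of users (V proportional to n^2). The constant k stands in for
# the per-connection value, which the law leaves unspecified.

def metcalfe_value(n_users: int, k: float = 1.0) -> float:
    """Estimated network value under Metcalfe's Law."""
    return k * n_users ** 2

# Doubling the user base quadruples the estimated value, which is one
# reason well-capitalized coin networks tend to pull further ahead.
assert metcalfe_value(2_000_000) / metcalfe_value(1_000_000) == 4.0
```

The quadratic scaling, rather than any particular value of k, is what drives the rich-get-richer dynamic described above.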

Of these 378 venture-funded projects, only 8.5 percent had a woman founder or co-founder. Venture capital (VC) is dominated by men, at about 90 percent, and in terms of partners and senior positions at major VC firms, disparities are even starker (as reported in the NBER Digest, 9/2017). The venture domain is also heavily biased toward funding the projects of white male entrepreneurs, and this is even more skewed in terms of capital offered or deployed. To illustrate, a study by the software and finance data firm PitchBook found that in 2018 women founders received just 2.3 percent of total venture capital funding raised in the crypto universe, as reported by Forbes (Gross, 2019).

In the information technology (IT) field broadly, roughly 18 percent of projects have a woman leader or co-leader. Even against this quite low IT baseline, crypto is disturbingly far lower, in fact well under half that level.

On the adoption and use side, the picture differs from that for BIPOC, whose adoption in the US nearly doubles that of whites (a participation rate of owners at any level, not a measure of crypto wealth): women holders of crypto number only half as many as men. Men are two-thirds of crypto holders/users and women just one-third.

Looking Backward at Backward, Gendered Computing Cultures

Figure 1: Women plug board programming.

Computing is a field that has seen substantial and important technical contributions by women from the start, dating to the women who programmed the ENIAC, the first meaningful electronic digital computer, from the mid-1940s to the early 1950s. At the same time, the field and the industry have been held back by discrimination in hiring, and their environments have been heavily male-gendered from the beginning. This has been true in the U.S., as documented in the tremendous scholarship of Janet Abbate (Abbate, 2012) and others, and in the United Kingdom, as shown in the masterful work of Mar Hicks (Hicks, 2017).

Gender in IT remains substantially understudied, especially in some geographies. There is also a dearth of literature on some industry segments, corporations, and organizations on the production side, as well as on much of the maintenance and use domains. Discriminatory practices against women and transgender people have been and remain pronounced in the military, national and local governments, industry, national laboratories, nonprofits, universities, and beyond.

Thomas Misa's pathbreaking, deeply researched work, published in Communications of the ACM and part of a larger book project, indicates there was no golden age of women's participation in the early years, but rather continuous, steady, low, and range-bound participation (between the high single digits and the upper teens, percentagewise) from the mid-1950s to the mid-1970s (Misa, 2021). His research draws on the best available data for the early years, user groups (for the above I am giving extra weight to IBM's user group SHARE, Inc. in combining Misa's totals, since IBM held more than 60 percent of the industry and its nearest competitor was always under 15 percent). Following this two-decade span was a gradual upward trend that ramped up in the 1980s, when, late in the decade, women's participation in computer science education and the workforce peaked at 37 to 38 percent. In the 1990s it fell sharply, as Misa and other authors explored in his important edited volume, Gender Codes (2010).

Participation, environment, culture, and advancement are all important. My own work has documented gender inequality in promotion to leadership roles in software and services companies in the US, especially pre-2000 (Yost, 2017). In recent years and decades, women's participation among computer science majors at US universities has hovered around 20 percent. Why the huge drop, and a recovery to only about half the former peak? The widespread adoption of PCs, the gendering of early PCs, gendered gaming (especially popular shooter games), the rise of male geek culture, and inhospitable environments for women are among the likely factors, as the publications of Abbate, Hicks, Misa, and others richly discuss. More attractive opportunities in law, medicine, and business outside IT are likely factors too, as participation in these areas rose as computing participation fell. And while far from free of discrimination, these professional areas may, on a relative basis, have had less.

Gender in Different Computer Security Environments

In co-leading a major computer security history project for the National Science Foundation (NSF Award Number: 1116862) a half decade ago (and I am thrilled that, just yesterday, we received another multiyear NSF grant, on privacy and security, a CBI project I am co-leading with G. Con Diaz), I published "The March of IDES: A History of Intrusion Detection Expert Systems" (Yost, 2015). I highlighted gender in one important area of computer security: intrusion detection. Early intrusion detection involved manually printing out logs and painstakingly reviewing the printouts, as the eyes of the security officers, auditors, and systems administrators who did this work glazed over. As computer use grew and fan-folded printouts rose in multiple stacks toward the ceiling at many computer centers, the task soon became overwhelming. In the 1980s automated systems were developed to flag anomalies for selective review by humans, and the artificial intelligence of expert systems was first applied in pioneering work to help meet the growing challenges (Yost, 2016).

Professor Dorothy Denning, Naval Postgraduate, 2013. Led IDES at SRI.

The National Security Agency had a very important pioneering research program in the 1980s and 1990s to fund outside intrusion detection work, called Computer Misuse and Anomaly Detection, or CMAD. This program was led by Rebecca Bace. The dollar amounts were not huge (they did not need to be), and Bace, with great energy and skill, expertly worked with the community to get much pioneering work off the ground toward impactful R&D at universities, national labs, and nonprofit research corporations like SRI. In conducting oral histories with Dorothy Denning, Teresa Lunt, and Becky Bace (the full texts of these published interviews are at the CBI website/UL Digital Conservancy), I got a sense of the truly insightful scientific and managerial leadership of all three (Yost, 2016).

The accelerating, sometimes playful, but also quite malicious and dangerous hacking of the 1970s and 1980s (for those Gen Xers and boomers reading this: remember War Games, and some of the non-fictional scares written about in newspapers?) became a serious problem. The US Government was often a core target of individual and state-sponsored hackers in the Cold War. This fostered a need, and federal contracts, for the field of intrusion detection systems. Over time, funds and contracts increasingly complemented the modest grants, often under $100,000, provided by Bace's NSA CMAD program.

The result, in the 1980s and 1990s, was essentially a new computer science specialty at universities: intrusion detection, a subset of computer security. There were some standout male scientists as well, but at the origin, and for years to follow, women computer scientists were disproportionately the core intellectual and project leaders. Women scientists such as Denning, Lunt, and Bace, as well as Kathleen Jackson (NADIR at Los Alamos) and others, headed the top early projects and provided the insightful technical and managerial leadership that allowed this computer security and computer science specialty to thrive (Yost, 2016).

Rebecca "Becky" Bace (1955-2017).
Rebecca "Becky" Bace (1955-2017).

Another computer security area I researched for NSF was access control systems and standards. This was all about knowing how operating systems worked, secure kernels, and the like. It was by far the largest computer security field in terms of participants, papers, funding, and standard-setting efforts, and it was overwhelmingly male. Operating systems (OS) was an established research area before access control became a key domain within it. Access control arose within the larger OS domain in response to breaches in the earliest time-sharing systems in government and at universities. MIT's pioneering early-1960s Compatible Time-Sharing System (CTSS) had little security; with its successor of the late 1960s and beyond, MULTICS, project leader Fernando Corbato and other top MIT computer scientists such as Jerome Saltzer made security design central to the effort.

Operating systems research and development in academia, industry, the DoD, DOE, and elsewhere was overwhelmingly male and very well funded. It followed that access control became an overwhelmingly male specialty of computer security and received strong federal research program and contract support.

Reflecting on this prior scholarship (women as the key leaders of the new 1980s intrusion detection area, men as the leaders of many of the most important operating system and access control projects), I have been pondering whether it provides any context or clues as to why, to date, the founders of cryptocurrency projects have largely been men. At the very least I think it is suggestive regarding established and new specialties, the connections between them, historical trajectories, and gendered opportunities and participation. A wholly new area, emerging alongside a dominant, more visible, and better funded one, can at times offer greater opportunities to newcomers, including women.

Following from this, I have begun to consider a related question: to what extent is cryptocurrency a new area offering new demographics and dynamics, and to what extent is it a continuation of the evolving field of cryptography? And how was it influenced by older cryptography and by that field's renaissance in impactful new form, its new direction?

In the mid-1970s and 1980s, with the emergence and rapid growth of a new form of cryptography, public key cryptography developed a strong intellectual and institutional foundation, especially thanks to the work of six men who would later win the Turing Award: early crypto pioneers Whitfield Diffie and Martin Hellman (authors of the landmark 1976 paper "New Directions in Cryptography"); Ron Rivest, Adi Shamir, and Leonard Adleman, the three from MIT known as RSA; and Silvio Micali, also of MIT. In addition to the RSA algorithm, Rivest, Shamir, and Adleman would start a company, RSA Data Security, which launched a pivotal event, the RSA Conference, and spun off an important part of its business, authentication, as Verisign, Inc. After some initial managerial and financial stumbles, the highly skilled James Bidzos would successfully lead RSA Data Security and, as Chair of the Board, Verisign.

In addition to his Turing Award, Micali had earlier won the Gödel Prize. In 2017, Micali founded Algorand, a "Proof-of-Stake" altcoin project now worth more than $10 billion; alongside running it, he is a professor of computer science at MIT. Algorand offers much in being environmentally sound (requiring little energy to mine), scalable, and strongly secure.

Cryptocurrency: Both a New and an Older Space

Finn Brunton's excellent book Digital Cash (2019), and other articles and books addressing the cypherpunks (the privacy-focused cryptographic activists who sought to retake control through programming and systems), feature overwhelmingly male actors. In addition to Diffie and Hellman, appropriately revered for inventing public key cryptography (in the open community), most of the high-profile cypherpunks are male: Timothy May, Eric Hughes, John Gilmore, and others.

Yet it was one of the co-founders, Judith Milhon, known as "St. Jude," who coined the term cypherpunk. The cypherpunks, whom journalist Steven Levy called the "Code Rebels" in his book Crypto, were inspired in part by the work of Diffie and Hellman. The response of the National Security Agency (NSA) was to try to prevent private communications it could not surveil, and to thwart or restrict the development and proliferation of crypto it could not easily break. This included its work with IBM to keep the key length for the Data Encryption Standard, or DES, at a lower threshold, which made DES subject to the "brute force" of NSA's unparalleled computing power. Further, it is widely believed that NSA also worked to place a back door in IBM's DES, code containing a concealed and secret way into the cryptosystem, to enable surveillance of the public.

St. Jude: A Creative Force Among Early Cypherpunks

Born in Washington, DC in 1939, St. Jude was a self-taught programmer, hacker, activist, and writer. As a young adult she lived in Cleveland and was part of its Beat scene. She volunteered in organizing efforts and took part in the Selma to Montgomery Civil Rights March in 1965, for which she was arrested and jailed. Her mug shot is a commonly published photo of her, symbolic of her lifelong commitment to civil rights. She moved from the East Coast to San Francisco in 1968, embracing the Bay Area's counterculture movement. In the late 1960s she was a programmer for the Berkeley Computer Company, an outgrowth of the University of California, Berkeley's famed time-sharing Project Genie.

Judith "St. Jude" Milhon
Judith "St. Jude" Milhon, 1965, Montgomery, Alabama Police Department. Mug Shot.

Active in Computer Professionals for Social Responsibility (CBI holds the records of this important group), she was an influential voice in the organization. She was also one of the leaders of Project One's Resource One, the first public computer bulletin board network in the US, based in the San Francisco area. She was known for her strong belief that network computer access should be a right, not a privilege. She was an advocate for women technologists and acutely aware of the relative lack of women "hackers" (the term then meant skilled programmer, not necessarily its later meaning associated with malicious hacks).

St. Jude was a widely recognized feminist in computing and activism circles. She was among the founders of the "code rebels," and that she gave the group the name that stuck, cypherpunks, suggests she had a voice in this male space (her writing and interviews strongly suggest this as well), though this was not necessarily, and probably not, indicative of a general acceptance of women in the group. Some of St. Jude's views were at odds with academic feminism and gender studies but may have fit better with the cypherpunks' ethos. She abhorred the political correctness she saw in academic communities and among educational and political elites. She believed technology would fix many problems, including the social problems of gender bias and discrimination. "Girls need modems" was her answer, an oft-repeated motto and rallying cry; it was what she felt was needed to level the playing field (Cross, 1995).

The lack of women among the cypherpunks, and St. Jude's great frustration that more women did not adopt her active hacker approach and ethic, likely point to a dominant, biased male culture that opened only to the kind of great talent, creativity, and interactive style she possessed.

St. Jude became a senior editor and writer at Mondo 2000, a predecessor publication whose style Wired drew on in writing about information technology. She was also lead author, with R.U. Sirius and Bart Nagel, of The Cyberpunk Handbook: The Real Cyberpunk Fakebook (Random House, 1995), its subtitle a bit prophetic, without intending the terminology, given the later-formed Facebook and its profiteering off fake news. Alongside her journalism she wrote science fiction. Judith "St. Jude" Milhon died of cancer in 2003.

Cyberpunk magazine

There is definitely a need for more historical research on gender and the cypherpunks, as well as on the sociology of gender in recent cryptocurrency projects, related intermediaries, investors, and users. Rudimentary contours can nonetheless be gently sketched from what is known. The names from the cypherpunks mailing list appearing in articles and the handful of books addressing the topic are about 90 percent male. At the start St. Jude was the sole woman in this core group. Limited to those directly interested in and investigating possibilities with digital currencies before the advent of Satoshi Nakamoto's Bitcoin in 2008, the group was even more male dominated.

As such, women role models were very few in early public key efforts, and more broadly among the code rebels or cypherpunks overall. There are deep connections from the cypherpunks to Bitcoin, and to other early coins as well. The young crypto entrepreneurs and activists of recent years were of course never part of the group, but they often grew an interest in it, were motivated by its past activity, and revered Tim May, Eric Hughes, John Gilmore, and others. This perhaps led to fewer opportunities, perceived or real, open to women, and likely to less recognition and consideration among women of this space as one worth pursuing.

Of the two women who are exceptions in the upper echelons of cryptocurrency, one came from an equally talented and active wife-and-husband team (the Breitmans). The other was a truly exceptional individual, possibly deserving the term genius, who, like Vitalik Buterin (Ethereum's lead founder), achieved amazing things at a young age, was exposed to the potential need for crypto, and was driven by the goal of socially impactful career success.

 

12 November 2021; Kathleen Breitman, Tezos, on Centre Stage during day one of Web Summit 2021 at the Altice Arena in Lisbon, Portugal. Photo by Harry Murphy/Web Summit via Sportsfile.

Tezos Co-Founder Kathleen Breitman  

There are more than 14,000 altcoins; the top 30 are currently capitalized at $4 billion or more (the value of circulating coins), and those not in the top 200 are generally under $40 million in capitalization and in a precarious spot if they do not rise at least five-fold in the coming years. Many in the investment community have pejoratively labeled lesser-capitalized altcoins (and, for some Bitcoin enthusiasts, all altcoins) as "sh*t coins." The cryptocurrency industry has given rise to a growing corps of specialized trade and investment journalists, following Ethereum founder Vitalik Buterin's pre-Ethereum pursuit of coin journalism in creating Bitcoin Magazine. These journalists, analysts, and evangelists (often all wrapped into one) write in e-magazines such as The Daily HODL and Coin Telegraph, two of the more respected larger publications among many others. They write mainly on the top 50 coins, which is what most of the investment community cares about, and thus write very heavily about men, a reinforcing mechanism hindering perceived and real opportunities for women.

Of the top 30 coins, only two have a woman founder or principal co-founder, none has a sole woman founder or an all-women leadership team, and many are all male at the top. A few coins have longer founders' groups, in the upper single digits. The two women principal co-founders of major altcoins are Kathleen Breitman of Tezos and Joyce Kim of Stellar Lumens. Tezos is at $4 billion in capitalization and ranks 28th among altcoins; Stellar Lumens is at $4.8 billion and ranks 22nd.

Tezos, a coin project built on a “Proof-of-Stake” model, was co-founded by Kathleen Breitman and her husband Arthur Breitman, and launched in 2018 along with a Tezos foundation created by Johann Gevers. Kathleen Breitman studied at Cornell University before joining a hedge fund and working as a consultant; Arthur Breitman is a computer scientist who worked in quantitative finance prior to Tezos. A dispute with the foundation and Gevers drew the Breitmans into a lawsuit that delayed the launch and hurt the project; ultimately a payout settled the matter. Kathleen Breitman has stated that she has been underestimated in the industry, as some assume her husband is the real creator when they very much co-created Tezos, technically and organizationally.
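Tezos's actual Liquid Proof-of-Stake protocol is far more elaborate (delegation, "baking" rights, endorsements), but the core Proof-of-Stake idea, choosing the next block producer with probability proportional to coins staked rather than to computing power burned, can be sketched in a few lines. This is an illustrative toy, not Tezos code; the validator names and stake figures below are hypothetical.

```python
import random

def pick_validator(stakes, rng=None):
    """Pick the next block validator with probability proportional to stake.

    stakes: mapping of validator name -> amount of coins staked.
    """
    rng = rng or random.Random()
    total = sum(stakes.values())
    r = rng.uniform(0, total)          # a point on the combined stake line
    cumulative = 0.0
    for validator, stake in stakes.items():
        cumulative += stake
        if r <= cumulative:            # the point falls in this validator's share
            return validator
    return validator                   # floating-point edge case fallback

# Hypothetical stakes (in tokens) for three validators
stakes = {"alice": 8000, "bob": 1500, "carol": 500}
print(pick_validator(stakes))          # alice wins ~80% of the time
```

Because selection weight scales with stake rather than with hashing work, no energy-intensive mining race is needed, which is why Proof-of-Stake chains are described later in this piece as "green."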

Stellar Lumen’s Co-Founder Joyce Kim

To say Joyce Kim’s career is impressive is an understatement; “stellar” is in fact quite fitting. Kim, a second-generation Korean American, grew up in New York City, attended the High School for the Humanities, and graduated from Cornell University at age 19. Kim followed this with graduate school at Harvard University and law school at Columbia University. She became a corporate M&A attorney while also doing pro bono work for Sanctuary for Families and for the Innocence Project. Back in high school she witnessed the trouble and expense lower-income people globally face in sending money to family; it also was likely evident in her work for Sanctuary for Families.

After success co-founding Simplehoney, a mobile ecommerce firm, as well as founding and serving as CEO of a Korean entertainment site, she became one of the rare (percentagewise) women in venture capital, working at Freestyle Capital. Focusing on the power of social capital, she partnered with Ripple founder Jed McCaleb in 2014 to found Stellar Lumens, an open-source blockchain-based coin, network, and platform project, an effort of the nonprofit Stellar Development Foundation.

Kim’s motivation and vision for Stellar were driven by the fact that 35 percent of adult women globally (World Bank statistics) do not have a bank account despite many of them saving regularly. As such, they have trouble protecting, sending, and receiving funds, paying bills, and helping family. Stellar as a platform and network lets people send funds, at low cost and in low sums, as easily as sending a DM or email. With 6.3 billion people in the world holding smartphones, and perhaps as many as 20 percent of them without a bank account, Stellar Lumens addresses a critical problem and serves a great societal need. The coin Celo also works in this very important area, making a positive difference in the world. Stellar Lumens (and Celo) change lives and empower lower-income people, especially women, who are less likely than men to have bank accounts due to discrimination and lesser resources. As Kim told Fast Company in an interview shortly after the founding, with Stellar she “found her true north” (Dishman, 2015). In addition to Stellar Lumens, Kim recently served as a Director’s Fellow at the famed MIT Media Laboratory.

Beyond the prestigious MIT fellowship, Kim has moved on from her role as Executive Director of Stellar and the day-to-day of the coin, and is having an impact, socially and financially, in the crypto venture capital arena, an area that could benefit from more women. Kim is now the Managing Partner at SparkChain Capital.

Mining Deeper: Guapcoin’s Tavonia Evans and the African Diaspora Community

At coins not in the top 30, 50, or 100 in capitalization, project teams work hard and hope their technology and mission will one day carry them to much higher levels. There are people and projects behind the coins, and that is sometimes disrespectfully forgotten when investors or others refer to coins and projects in derogatory terms.

I wanted to research a coin in the middle third of the 14,000 or so coins out there in current capitalization and was deeply moved by learning about Guapcoin and its tremendous mission. It was founded in 2017 by African American data scientist Tavonia Evans. Evans, a mother of eight, had founded a peer-to-peer platform company earlier but was unsuccessful at getting venture funding. Venture capital is not on a level playing field and far less than one percent of venture funding goes to African American women led businesses. At this intersection--African American women--societal bias in finance is particularly pronounced.

The inability to get funding for that business led her to move on and inspired her Guapcoin project, a cryptocurrency focused on addressing “the economic and financial concerns of the Global African Diaspora community.” Evans’s vision for Guapcoin goes beyond merely being a means of exchange for the Global African Diaspora community and for “Buying Black”; it is also a property-protection mechanism that combats gentrification and documents all forms of property ownership (from real estate, to copyright, to music licenses) so that “the Black and Brown community will have its wealth protected by a transparent, immutable blockchain.”

In 2019, Evans and Guapcoin founded the Guap Foundation to permanently ensure the mission of the coin project is carried out. Many altcoins have associated foundations to both further and to protect the integrity of the mission for generations to come (guapcoin.org).

It is through amazing, socially oriented, and green projects like Guapcoin, Stellar Lumens, and Celo that I realized my initially negative perspective on cryptocurrency from several years back, rooted in my very critical views of Bitcoin’s environmental impact, was sorely misguided for many altcoins launched in 2016 and later, and for 2015’s Ethereum, which is converting to Proof-of-Stake as its consensus model to become green.

 

Blockchain and the web

“Meetups” and Standout Early Scholarship on Gender and Cryptocurrency

There are a mere handful of published scholarly studies to date examining gender and cryptocurrency. One stood out to me in being especially compelling in its creative methodology, insights, and importance. Simon Fraser University’s Philippa R. Adams, Julie Frizzo-Barker, Betty B. Ackah, and Peter A. Chow-White designed a project where they engaged in participant observation and interaction with over a half dozen “Meetup” events that were primarily, or at least in part, marketed to women, often to educate, encourage, or address gender disparity in cryptocurrency. All of these were in the Vancouver, British Columbia, metropolitan area.

Adams and her co-authors do a wonderful job of interpreting, analyzing, and eloquently conveying the meaning of these events. Some meetups were well designed and executed to offer support to women and empower them in this new industry and space. Others were far less effective, succumbing to the challenges of "trying to support adoption of a new technology," or ended up presenting more resistance than support. I urge you to read this excellent work of scholarship (Adams et al., 2019); the chapter appears in the recommended-readings volume edited by Massimo Ragnedda and Giuseppe Destefanis (2019), an excellent book overall and one of the first quality social science books on the emerging Web 3.0.

Educational and Empowerment Organizations and Looking Forward

In addition to meetup events that are local in origin, a growing number of nationwide education and advocacy organizations by and for women in cryptocurrency have emerged. Some foster local meetup events; others offer other supportive programs.

In Brooklyn, New York, Maggie Love founded SheFi.org, seeing blockchain as a powerful tool for more inclusive and equitable financial tools and systems. It engages in education to advance understanding and opportunities for women in blockchain and decentralized finance.

Global Women in Blockchain Foundation is an umbrella international organization without a central physical headquarters (in the spirit of the technology and decentralization). It is designed to accelerate women’s leadership roles in blockchain education and technology. The websites for these two organizations can be found on this site in the list of organizations.

Efforts to reduce the tremendous gender gap in cryptocurrency development projects, and especially in founder and leadership posts, are extremely important, both ethically and for the creativity, success, and potential of this field. Further, blockchain, and its applications in crypto, are at the heart of Web 3.0, the future of digital technology. If the field remains 90 percent male, it will hurt the field of IT greatly by further reducing women's overall participation in IT, given blockchain's growing share of our digital world.

There is not only a large gender gap in computer science, but also in finance, hedge funds, and venture capital, all of which accentuate imbalances in power and opportunity favoring men in crypto. The VC gender gap is especially problematic, as it reinforces hurdles for women and BIPOC, independently and especially at these important intersections, for both small companies and cryptocurrency projects.

Joyce Kim's leadership at SparkChain, funding crypto projects, is thus so refreshing. The firm's staff is notably diverse in terms of gender, race, and ethnicity. More women in VC leadership, and in VCs with a crypto focus, is incredibly important. It is also critical that education in both high school and college does not, directly or indirectly and inadvertently, create gendered spaces favoring men or inhospitable to women.

The excellent study by the team at Simon Fraser University looking at cryptocurrencies, and other studies looking at finance and hedge funds, have identified jargon and terminological barriers to entry. In crypto the barriers are many, from outright gender bias, to clubhouses, to other restrictive spaces, but terminology and cultures of exclusion are especially powerful in hindering inclusion, both intentionally and unintentionally.

One motivation for this blog and site, and especially for the site’s inclusion of a historical glossary of terms (continually added to) and a Cryptocurrency Historical Timeline, is to contribute in a small way to education and to first steps in removing terminological and cultural barriers to inclusion in this area. Anyone interested in this area and devoting time to it will soon move far beyond these resources, but they might help understanding a bit initially; at least that is a goal. I also see these as tools that can greatly benefit from the community.

I am continually learning from readings, correspondence, and meetings with others in this space. I have already added to the readings from useful comments and suggestions people sent me after my first post last week. I hope these sources evolve as community-used and community-influenced tools, and thus I very much encourage and welcome feedback. The timeline and glossary will go through additions and tweaks, and thus many editions or iterations, but for now they get at some of the technical and cultural terminology and basics. (Why does the mantra of HODL, Hold On for Dear Life, keep coming up as crypto coins currently plummet? The glossary provides the historical context.)

[Republished with only slight adjustment from Blockchain and Society: Political Economy of Crypto (A Blog), January 25, 2022: http://blockchainandsociety.com]

[Please consider subscribing to the free blog at the URL above]


Bibliography

Abbate, Janet (2012). Recoding Gender: Women’s Changing Participation in Computing, MIT Press.

Adams, Philippa R., Julie Frizzo-Barker, Betty B. Ackah, and Peter A. Chow-White (2019). In Ragnedda, Massimo and Giuseppe Destefanis, eds. Blockchain and Web 3.0: Social, Economic, and Technological Challenges, Routledge.

Brunton, Finn. (2021). Digital Cash: The Unknown History of the Anarchists, Utopians, and Technologists Who Created Cryptocurrency, NYU Press.

Celo Website. www.celo.org

Cross, Rosie (1995). “Modern Grrrl.” Interview with Judith “St. Jude” Milhon. Wired, February 1. www.wired.com/1995/02/st-jude/

Dishman, Lydia. (2015). “The Woman Changing How Money Moves Around The World.” Fast Company February 6.

Hao, Karen. (2018). “Women in Crypto Are Reluctant to Admit There Are Very Few Women in Crypto.” Quartz (qz.com), May 5.

Hicks, Marie (2017). Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, MIT Press.

Guapcoin Website. www.guapcoin.org

Gross, Elana Lyn. (2019). “How to Close the Venture Capital Gender Gap Faster.” Forbes, May 20.

Klemens, Sam. (2021). “10 Most Influential People in Crypto: Kathleen Breitman.” Exodus. August 3.

Misa, Thomas J., Ed. (2010). Gender Codes: Why Women are Leaving Computing, Wiley-IEEE.   

Misa, Thomas J. (2021). “Dynamics of Gender Bias in Computing.” Communications of the ACM 64: 6, 76-83.

St. Jude, R.U. Sirius, Bart Nagel (1995). The Cyberpunk Handbook, Random House.

Yost, Jeffrey R. (2015). “The Origin and Early History of the Computer Security Software Industry.” IEEE Annals of the History of Computing, 32:7, April-June, 46-58.

Yost, Jeffrey R. (2016). “The March of IDES: The Advent and Early History of the Intrusion Detection Expert Systems.” IEEE Annals of the History of Computing, 38:4, October-December, 42-54.

Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry, MIT Press.


Jeffrey R. Yost (January 2022). “Few Women on the Block: Legacy Codes and Gendered Coins,” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 1-18.


About the Author: Jeffrey R. Yost is CBI Director and HSTM Research Professor. He is Co-Editor of Studies in Computing and Culture book series with Johns Hopkins U. Press, is PI of the new CBI NSF grant Mining a Useful Past: Perspectives, Paradoxes and Possibilities in Security and Privacy. He has published 6 books, dozens of articles, and has led or co-led ten sponsored projects, for NSF, Sloan, DOE, ACM, IBM etc., and conducted hundreds of oral histories. He serves on committees for NAE, ACM, and on two journal editorial boards.