Interfaces

Essays and Reviews in Computing and Culture 


Interfaces publishes short essay articles and essay reviews connecting the history of computing/IT studies with contemporary social, cultural, political, economic, or environmental issues. It seeks to be an interface between disciplines, and between academics and broader audiences. 

Co-Editors-in-Chief: Jeffrey R. Yost and Amanda Wick

Managing Editor: Melissa J. Dargay


Figure 1: Cover of The Routledge Handbook of Information History.

The Handbook

It is a familiar adage that we are currently living in a new “information age,” defined by transformative information technologies and practices such as online shopping, social media, digital surveillance, information warfare, disinformation, and artificial intelligence. While these developments are often viewed as revolutionary, they are not entirely unprecedented. Our editorial approach for The Routledge Handbook of Information History (Weller et al., 2026) was to show how previous eras, cultures, and societies also experienced significant “information ages,” albeit interpreted and understood in terms different from our own. The Handbook is not the first collection of scholarship in information history; indeed, as a concept, information history is several decades old (Stevens, 1986; Darnton, 2000; Headrick, 2000; Blair et al., 2021). However, it is the first to be published on this scale and to use the term “Information History” as its titular focus, situating “information” within the historiography of the field, alongside long-view pieces and empirical case studies of information practices throughout, and across, human history.

In making information a conscious and explicit tool of heuristic enquiry, scholars can better comprehend the complex and varied history of information across different time periods. Historicising information allows us to explore what different societies, cultures, or even individuals understood information to be in different contexts and eras. However, an ongoing issue that has plagued the field of information scholarship is that of definition. The concept of “information” is fluid, shifting and changing according to context. As editors of The Handbook we took the view that we must, as historians employing good historical practice, define information in terms of the context in which it was being explored, rather than offer one over-arching definition. It was still necessary, however, to provide some sort of focus, to avoid information being understood as anything or everything (Black et al.). The contributions to The Handbook were based on the premise that knowledge resides internally within the human mind; once such knowledge is communicated externally in some way, it becomes information. Any such communication or sharing of information requires external representation and, as such, requires information practices. Information practices have been critical to meeting the societal, political, economic, or military needs of their time, past and present, and when explored across two millennia and four continents, as they are in The Handbook, a complex pattern of information practice becomes visible.

Contributors to The Handbook were therefore invited to anchor the potentially nebulous phenomenon of information within everyday life. This allowed for a broad and diverse exploration of the relationship between multiple information practices and their broader societal, political, cultural, and technological contexts across numerous chronological and geographic settings. This approach substantiates information history as a significant – and highly topical – subfield within the broader discipline of history.

In assembling The Handbook, we prioritised inclusivity and diversity in all areas: not only in thematic content, but also in contributors. A conscious effort was made to encourage a wide variety of perspectives, resulting in contributions from over forty emerging and established scholars from around the world, the majority of them women. We aimed to represent a wide range of academic backgrounds and perspectives, enabling powerful collaborative thinking about the historical dimensions of information. In addition, a significant proportion of the archival material consulted for The Handbook was from non-English language collections. The chapters explored information practices from Europe, Africa, and Asia, as well as North and South America, reflecting the truly global scope of information history as a concept and subject matter, as well as an academic field.

The Handbook was consciously structured in a different way to traditional information narratives. Our introductory chapter, forming Part I of the book, situates a grand narrative of information history and its historiography (Black et al.), while Part V offers a treatise on the raison d’être of the field (Mak). While conventional themes of information – dissemination, control, access, classification, the information state, and so on – are unequivocal throughout the collection, as editors we felt that focusing on these alone prevented a true reflection of the diversity, scope, and scale of information practices as evidenced throughout historical experience. Instead, chapters were structured into loose groups – made possible only in a book of this scale and scope – which allowed the invaluable opportunity to step back and observe common themes, or threads, connecting geographic areas, societies, and cultures, stretching across the past two millennia. In this, The Handbook is unique.

Figure 2: Close up of an Inka style khipu. Source: khipu V A 66830, Staatliche Museen zu Berlin, Ethnologisches Museum, photograph taken by Lucrezia Milillo.

Visualising, Describing, Expressing

Part II of The Handbook explores the visual expression of information. In this, as throughout the book, many information practices appear to be extremely contemporary. Modern facial recognition software, and “attack journalism”, where an individual is publicly targeted and criticised or shamed, are instantly recognisable as informational discourses of the twenty-first century. But both have long histories. From early modern newsbooks to digital platforms, attack journalism can be understood as a representational culture of news and information, with profound implications for socio-political power and control (Usher). Likewise, eighteenth-century epistemes based on classification, measurement, and representation, including the notion that information could be extracted from a human face, are long-standing ideas which more recent forms of AI and digital technology have appropriated under the aegis of state welfare or protection in times of conflict (one only has to think of ongoing worldwide controversies regarding migration and border control) (Higgs). Racialisation is evident in modern society in the way in which migrants, minorities, or communities can be portrayed, but such information practices are also evident in examples as diverse as the racialised language of colonial newspaper advertisements from the eighteenth and nineteenth centuries (da Silva Perez), or the rhetorical significance of Thomas Jefferson’s use of the terms “data”, “facts”, and “information” in understanding how datafication can affect relations of power and political governance through racialising categories and hierarchies (Adler).

The recording and circulation of information on individuals has been embedded in human societies from the Roman Empire (Riggsby) to modern census returns or digital ID debates. All of these have required information transactions and an emphasis on the role of individuals and their relationship with the state or governing bodies. Not only national information organisation but also cultural collections are represented in The Handbook, from print encyclopaedias (Simonsen) to the possibility of a Semantic Web theorised against the backdrop of Paul Otlet’s Mundaneum at the turn of the twentieth century (van den Heuvel). These share the ideal, dating from the Enlightenment, of a repository of all of humanity’s knowledge accessible and visible in one place. Wikipedia, although a relatively recent phenomenon in historical terms, is a dynamic digital manifestation of this same idea, following shared epistemological principles with Diderot’s eighteenth-century Encyclopédie (Luyt).

In other cultures and time periods, haptic information practice was evident. In the medieval West, for example, information transmission relied on the oral and the visible, leaving dynamic expressions and impressions which tied it to its human creators (Bedos-Rezak); it is also evident in the physicality of Andean khipus, used by the Inka civilisation of the pre-Hispanic Americas (Milillo and Hyland). Khipus, colourful knotted strings made from organic fibres, are physical artefacts: unconventional but highly complex and sophisticated media of information storage, sharing, and display, in use for over a millennium. Such sensory information practices remain vitally present today, through the use of tactile feedback in widening access to artefacts held in museums, or haptic assistive tools for the visually impaired which provide feedback through touch and motion stimulation at the fingers or hands, to suggest just two examples (Brewster, 2005; Jiang et al., 2025).

All of these examples from Part II of The Handbook, from antiquity to the very modern, represent a multiplicity of practices throughout history which visualise or describe information, often challenging conventional notions of information expression. They demonstrate that information practices are far from a modern phenomenon and instead have long-reaching precedents and histories which are being uncovered through the lens of information history.

Managing, Ordering, Classifying

Such longevity continues in Part III of The Handbook, in which information practices were grouped around the idea of managing information. In contemporary terms, this is just as vital an issue as it was in the ancient world, with information collected for commercial purposes, or to justify state intervention, protection, or civil surveillance, a defence used time and again by extreme right-wing groups throughout the twentieth century and beyond. Yet the idea that a ruler – whether political or regal – should have access to information about their citizens in order to facilitate or defend decisions regarding the governance of the nation has ancient origins. During the Warring States period in China (c. 475–221 BCE), there was constant warfare between rival states vying for power and control. Equally important, however, was the idea that the common people should participate in the process of information collection to allow a balanced and peaceful state (Robinson). This is also evident in modern Singapore, as part of the city-state’s longer colonial and postcolonial history, in which informatic governance allowed the government to be at the centre of information networks, enabling centralised control over the structure of information exchange (Stevens and Harjani).

The twentieth century saw an acceleration in the ways in which states used and ordered information about their citizens, exacerbated by warfare and the growing weaponisation of information. In Britain, the Wartime Social Survey of the 1940s was created as part of a crisis response to the Second World War, providing government with the information required for forming and administering political policy. The impact of such social research highlighted contemporary concerns about privacy and surveillance and allowed the voice of the everyday man and woman to reach policy makers (Irving). In the twenty-first century, one might argue that social media and digital platforms serve a similar, if less formal, purpose.

Towards the end of the pre-computer era, the information machine of the British Inter-Service Topographical Department illustrated how information was collected, processed, organised, and circulated during the Second World War (Black). But in times of threat, what information is held back can often be as significant as what is disseminated. Misinformation, control, access, and the management of sensitive information are as vitally important now as ever. With long historical precedents, the history of access to information about nuclear weapons during the Cold War, to give one example, has resonance with twenty-first-century governments and questions of who mediates what information should be shared and how, how it might be utilised, and how we discern between authoritative and unreliable or erroneous information (Farbøl and Sylvest).

The management of information has been critical not just to state formation or the waging of war, but also within business and industry, from the information factories of the Dutch East India Company during the seventeenth century (Szommer) to the modern computer (Aspray). Information “technologies” appear irrevocably linked with computing, but the practice of classifying information, whether in libraries post-industrialisation (Attar), twentieth-century archives (Shepherd), or even the Aristotelian categories of the Middle Ages (O’Daly), reinforces the proposition that technologies of information are not just digital, but can be understood as any practice which organises or makes sense of a collection of information.

Such collections can also reveal the ways in which gender could be appropriated by information practices. During the late Victorian and Edwardian periods, information was collected on the physical forms and bodies of women, who themselves became “information objects” as part of bigger overlapping discourses of industrialisation, empire, eugenics, and the rise of the women’s movement (Weller). Such debates over the role of the female body and a woman’s autonomy over herself remain sensitive and critical, from parts of the Middle East to far-right ideologies. Through medical records, surveillance, classification, social statistics, and public health statistics, information practices can risk treating the body as a piece of data or an object to be managed. When this narrative comes in the context of “improving” or “regulating” populations, there is a concerning alignment with how eugenic framing treats bodies as sites of intervention. In the twentieth century, with the rise of the computer, gendered demarcations perpetuated embedded beliefs and could be seen in the differing information roles allocated to men and women (Bryant). Such distinctions echo the broader context in which women have, throughout history, recurrently been considered secondary citizens with a lack of sovereignty over their minds and bodies. Historical explorations of gender in information practices of all varieties, where information is handled, ordered, and classified, remain critical to our understanding and shaping of the modern world, not least through the generative but pervasive technology of AI, which has been shown to perpetuate existing gender bias (Hall and Ellis, 2023).

Figure 3: German soldiers writing a report to be delivered by homing pigeon, March 1917. Source: RG 165, NARA.

Circulating, Networking, Controlling

The penultimate section of The Handbook, Part IV, continues the links between past and present, showing how information networks and practices of information exchange can highlight questions of access and power. Whether through the mutual reinforcement of information and decolonisation in post-colonial Egypt (Leblanc), or the universal ideals of freedom of access to information and freedom of expression in South Africa from the seventeenth century to the modern day (Dick), the circulation and control of information has had great historical potency. Meanwhile, theories of information and communication have been intertwined since Claude Shannon’s work of the mid-twentieth century, but other models, including those from China, Latin America, France, and Russia, offer alternative positions and philosophies, highlighting the possibility of a more global history of information and communication theories (Balbi et al.).

Historically, as in the modern day, networks of information reside just as much in social and cultural exchanges as they do in the technological sphere. The early modern city used information as both object and tool of social and political competition, showing that across differing socio-political levels, and across differing cities, from Istanbul in the East to Venice in the West, information was simultaneously weaponised by governments and used as an instrument of criticism and challenge by those governed (de Vivo). Challenging existing ideas of information, or of the ways in which information is communicated across distances, is also evident in wartime technology and information exchange, where homing pigeons, in use since antiquity, have long assisted the one-way movement of information between individuals in different locations. Even in the era of digital communication, pigeons have once again become a vital method of information exchange, resilient to cyber hacking or infiltration (Blazich, Jr.). Sometimes, analogue is better.

The circulation of information has never been solely political, of course. Sixteenth- and seventeenth-century goldsmiths in London used recipes and books of secrets as a form of cultural information exchange. Within the broader context of experimental science, craftspeople were shown to be major participants in the circulation of information about the natural world, while also enhancing their social standing through such information practices (Kilburn-Toppin). Cultural information networks can also be seen in nineteenth-century Denmark, where street ballads offer insight into how information about significant events could be circulated, whilst not always reflecting a true or accurate representation of the facts. As commodified information, their value was often emotive and rhetorical (Skouvig). Such information practices invite us to explore the nuances of what has, historically, been considered information worth selling, who bought it, and how this has fluctuated over time. Moreover, in an age of misinformation, it challenges us to ask how much of the information we are sold is truly accurate.

In times of conflict, access to information has been critical in supporting, or challenging, existing power structures. The Black communities of the Spanish Caribbean during the late eighteenth century were exposed to racial and political violence following the spread of information and ideas from revolutionary France (Soriano). Similar tensions have continued around the world to the present day, as diverse media – from eighteenth-century pamphlets to twenty-first-century podcasts – have served to either divide or connect minority communities.

The spread of ideas and the mobility of information, especially among more peripheral groups, also remains highly topical in today’s world, determining why people migrate, where they might move to, and their impressions of community or of cultural or intellectual diasporas. Such mobility of information is evident in migrant populations from at least the eighteenth century, and case studies of groups such as the Roma can offer real insights into the realities of such movements (Rosenhaft). Even smaller groups, such as the family, can be understood as information communities, with access, dissemination, and even control of the social circulation of information between relatives (Friedrich). This dynamic can be dated from the early modern period through to the present day, as family demographics have shifted in response to work, conflict, technology, and changing social hierarchies.

The profound technological, economic, and social transformations that took place during the nineteenth and twentieth centuries witnessed the emergence of new corpora of information, information professions, and information infrastructures (Cortada). While the accumulation of information and its management were accentuated by industrialisation, such practices had been evident in commercial and imperial networks for several centuries prior to this. The East India Company offers an insightful case study of how the management of scientific information could be symbiotic with the changing political and economic conditions of British colonialism in Asia (Ratcliff). By the nineteenth century such vast networks of information, owned and controlled by what had become the largest and most powerful corporation in the world, were regarded by some as part of the Company's attempt to maintain an information monopoly, and were criticised accordingly. Other information networks were used for more ideologically motivated purposes. During the second half of the twentieth century the Romanian Securitate, along with other secret police forces in the Eastern Bloc, used information about their targets’ vulnerabilities to entice or coerce them into collaboration (Glajar and Petrescu). Such systems of information manipulation are omnipresent throughout history, but modern information practices which serve to control information and its use have names that are just as recognisable: misinformation, manipulation, bowdlerisation.

The Importance of Information History

While good history avoids presentism, it must also ask questions of the past that relate to understanding the present. The collection of scholarship within The Handbook as a whole highlights recurring themes, or threads, and the degree to which these threads trace longer social and historical arcs is notable and significant. Some of these have been touched on above, but, of course, more traditional historical themes such as empire, race, and class (or labour relations) are also manifest throughout the book.

Chapters on early modern and colonial empires, such as those on the Dutch East India Company and the Caribbean, show how information practices served data collection, trade records, and scientific documentation in support of imperial extraction. Other chapters, which discuss postcolonial situations (Egypt, Singapore, and South Africa), explore how information infrastructures persist as technologies of governance even after formal decolonisation. Modern information infrastructure, such as censuses, biometric databases, algorithms, and digital platforms, was not created in a vacuum. It descends from imperial and colonial practices of classification, where information collection served to map, control, and manage colonised populations. Modern information practices can therefore be understood both as products of empire and as mechanisms of imperial continuity in postcolonial governance.

Other chapters draw out the idea of race and racialisation as epistemic practices, intimating, as a collective, that, historically, race may be produced through information practices. In the colonial press, for example, racialised language made Black and enslaved people visible as property but invisible as individuals. Thomas Jefferson’s data practices expose how the very epistemology of race was informationally constructed. The Caribbean and South African chapters extend this, showing how race structured communication and knowledge exchange under slavery and apartheid. Historicising the concept of “neutral” information allows us to challenge it, thereby opening a discourse about bias as a structural feature of modern information practices and systems.

Alongside race, class and labour relations also form a recurring and overarching theme of many of The Handbook’s chapters. Here, information practices can be seen as sites of production, tracing the material conditions of information work, from early modern workshop cultures to nineteenth- and twentieth-century clerical and computational labour. The industrialisation of information mirrors broader shifts in capitalism and social stratification that appeared from the nineteenth century onwards. By tracing information practices through labour and class, one might argue that information work (in its broadest sense) has long relied on undervalued, gendered, or racialised labour. This highlights how information practices, the collecting, classifying, storing, and circulating of information, are forms of human work shaped by social hierarchies. Historically, clerks, women “computers,” and colonial subjects often performed the manual and cognitive tasks that sustained archives, bureaucracies, and empires, yet their labour was rendered invisible through claims of neutrality or automation. In a revision of Castells’ vision of informational labour (Castells, 1996), contemporary digital systems often reproduce these dynamics, with platform users, data annotators, and content moderators performing this same hidden labour, linking the exploitation of past archival and bureaucratic workers to the information labour of the present (Halcli & Webster, 2000; Altenried, 2020).

One of the biggest impacts of this project is a visceral reminder of the importance and relevance of historical perspective, more vital now than ever. Information history is a critical tool for understanding the information themes of the present day: personal privacy, information collection by the state, the power of technology, misinformation, datafication of the human body and character, information warfare, and, significantly, the role of information as a tool of liberation. Such themes surround us every day in our contemporary world, but their interconnected precedents are woven throughout The Routledge Handbook of Information History. There is, as shown throughout this collection, an enduring and complex relationship between the ubiquity of information practices and the political, cultural, and social dynamics of humanity’s past and present.


Bibliography

Altenried, Moritz (2020). “The Platform as Factory: Crowdwork and the Hidden Labour behind Artificial Intelligence.” Capital & Class, 44 (2), 145-158.

Blair, Ann et al. (2021). Information: A Historical Companion. Princeton: Princeton University Press.

Brewster, Stephen (2005). The Impact of Haptic ‘Touching’ Technology on Cultural Applications. Routledge.

Castells, Manuel (1996). The Information Age: Economy, Society and Culture, Vol. I, The Rise of the Network Society. Oxford: Blackwell.

Darnton, Robert (2000). “An Early Information Society: News and the Media in Eighteenth-Century Paris.” The American Historical Review, 105 (1), 1–35.

Halcli, Abigail & Webster, Frank (2000). “Inequality and Mobilization in The Information Age.” European Journal of Social Theory, 3 (1), 67-81.

Hall, Paula and Ellis, Debbie (2023). “A Systematic Review of Socio-Technical Gender Bias in AI Algorithms.” Online Information Review, 47 (7), 1264-1279.

Headrick, Daniel (2000). When Information Came of Age: Technologies of Knowledge in the Age of Reason and Revolution, 1700-1850. Oxford: Oxford University Press.

Jiang, Chutian; Kuang, Emily; and Fan, Mingming (2025). “How Can Haptic Feedback Assist People with Blind and Low Vision (BLV): A Systematic Literature Review.” ACM Transactions on Accessible Computing, 18 (1), article no. 2, 1-57.

Stevens, Norman (1986). “The History of Information,” Advances in Librarianship, 14, 1-48.

Weller, Toni; Black, Alistair; Mak, Bonnie; and Skouvig, Laura (Eds.) (2026). The Routledge Handbook of Information History. Routledge.

Handbook Chapter Bibliography

Adler, Melissa (2026). “‘There Must be Something Vicious in the Data’: Thomas Jefferson’s Techniques of Racialisation in the Production of Data, Facts, and Information.” The Routledge Handbook of Information History, 110-123. Routledge.

Aspray, William (2026). “The History of Computing: The Development of an Information History Field.” The Routledge Handbook of Information History, 353-367. Routledge.

Attar, Karen (2026). “Representing Information in the Western World: Classification, Cataloguing, and the Library Context since Industrialisation.” The Routledge Handbook of Information History, 338-352. Routledge.

Balbi, Gabriele; Negro, Gianluigi; Rikitianskaia, Maria; Scolari, Carlos A.; and Trudel, Dominique (2026). “Information and Communication Theories: A Global History of the (Con)fusion.” The Routledge Handbook of Information History, 536-551. Routledge.

Bedos-Rezak, Brigitte Miriam (2026). “Information and Its Forms: Documentary Practices in the Medieval West (Mid-Ninth to Mid-Thirteenth Centuries).” The Routledge Handbook of Information History, 52-77. Routledge.

Black, Alistair (2026). “Information, Topography, and War: Information Management in Britain’s Inter-Service Topographical Department (ISTD) in the Second World War.” The Routledge Handbook of Information History, 264-279.  Routledge.

Black, Alistair; Mak, Bonnie; Skouvig, Laura; and Weller, Toni (2026). “Situating Information History: The History and Historiography of Information and Its Practices.” The Routledge Handbook of Information History, 3-34. Routledge.

Blazich, Jr., Frank A. (2026). “Feathers and Formats: Information, Technology, and Homing Pigeons in War.” The Routledge Handbook of Information History, 516-535. Routledge.

Bryant, Antony (2026). “‘Men are Engineers, Women Are Computers’: Women and the Information Technology Interregnum.” The Routledge Handbook of Information History, 308-323. Routledge.

Cortada, James W. (2026). “How Information Changed Between the Late Nineteenth Century and World War II.” The Routledge Handbook of Information History, 471-484. Routledge.

da Silva Perez, Natália (2026). “Racialised Language in Colonial Newspaper Advertisements During the Eighteenth and Nineteenth Centuries.” The Routledge Handbook of Information History, 95-109. Routledge.

de Vivo, Filippo (2026). “The Politics of Communication in the Early Modern City: Istanbul and Venice.” The Routledge Handbook of Information History, 385-401. Routledge.

Dick, Archie L. (2026). “Dynamics of the Human Element in South Africa’s Information History.” The Routledge Handbook of Information History, 566-580. Routledge.

Farbøl, Rosanna and Sylvest, Casper (2026). “Sensitive Information: Knowing and Preparing for Nuclear War during the Cold War.” The Routledge Handbook of Information History, 295-307. Routledge. 

Friedrich, Markus, (2026). “Families as Communities of Information. Or: The Importance of Knowing Your Relatives.” The Routledge Handbook of Information History, 501-515. Routledge.

Glajar, Valentina and Petrescu, Corina L. (2026). “Factual Fictions and Fictionalised Facts in the Reports of the Romanian Secret Police.” The Routledge Handbook of Information History, 485-500. Routledge.

Higgs, Edward (2026). “Facial AIs and Information Systems in Historical Context.” The Routledge Handbook of Information History, 187-200. Routledge.

Irving, Henry (2026). “The Wartime Social Survey as Information History.” The Routledge Handbook of Information History, 280-294. Routledge. 

Kilburn-Toppin, Jasmine (2026). “Recipes, Gold, and Information Exchange: Workshop Cultures in the Early Modern Metropolis.” The Routledge Handbook of Information History, 402-413. Routledge.

LeBlanc, Zoe (2026). “Decolonisation and Information in Postcolonial Egypt, 1952–1967.” The Routledge Handbook of Information History, 552-565. Routledge. 

Luyt, Brendan (2026). “The Fault Lines of Knowledge: An Examination of the History of Wikipedia’s “Neutral Point of View” (NPOV) Information Policy and Its Implications for a Polarised World.” The Routledge Handbook of Information History, 173-186. Routledge. 

Mak, Bonnie (2026). “What Is Information History For?” The Routledge Handbook of Information History, 583-597. Routledge.

Milillo, Lucrezia and Hyland, Sabine (2026). “The Andean Khipus: An Information System Made of String.” The Routledge Handbook of Information History, 78-94. Routledge.

O’Daly, Irene (2026). “Creativity in Classification: Phrasing and Presenting the Aristotelian Categories in the Middle Ages.” The Routledge Handbook of Information History, 215-232. Routledge.  

Ratcliff, Jessica (2026). “Colonial Political Economies of Information: The East India Company and the Growth of Science in Britain.” The Routledge Handbook of Information History, 414-428. Routledge.

Riggsby, Andrew (2026). “Information in the Roman Empire." The Routledge Handbook of Information History, 37-51. Routledge. 

Robinson, Rebecca (2026). “‘Those Who Help His Sight and Hearing Are Many’: Information and the State in Early China.” The Routledge Handbook of Information History, 203-214. Routledge.

Rosenhaft, Eve (2026). “Information and Mobility: Migrants and Roma as Historical Cases.” The Routledge Handbook of Information History, 442-456. Routledge. 

Shepherd, Elizabeth (2026). “Central and Local: A History of Archives in Twentieth Century England.” The Routledge Handbook of Information History, 324-337.  Routledge. 

Simonsen, Maria (2026). “Encyclopaedias as Cultural Carriers of Information: A Scandinavian Perspective.” The Routledge Handbook of Information History, 124-138. Routledge. 

Skouvig, Laura (2026). “Emotions as Commodities: Street Ballads and the Commercialisation of Information.” The Routledge Handbook of Information History, 457-470. Routledge.

Soriano, Cristina (2026). “In-Between Writing and Orality: The Circulation of Information in the Black Spanish Caribbean during the Age of Revolutions, 1789–1808.” The Routledge Handbook of Information History, 429-441. Routledge.

Stevens, Hallam and Harjani, Manoj (2026). “Smart Cities and Informatic Governance: The Management of Information and People in Postcolonial Singapore.” The Routledge Handbook of Information History, 368-382. Routledge. 

Szommer, Gabor (2026). “Trading Factories as Information Factories: Aspects of Information Management in the Dutch East India Company’s Japanese Factory, 1609–1623.” The Routledge Handbook of Information History, 233-246. Routledge. 

Usher, Bethany (2026). “Information as Instruction: A Short History of Attack Journalism.” The Routledge Handbook of Information History, 158-172. Routledge. 

van den Heuvel, Charles (2026). “Paul Otlet’s Experiments with Knowledge Organisation and Explorations of a Future Semantic Web.” The Routledge Handbook of Information History, 139-157. Routledge. 

Weller, Toni (2026). “The Female Body as an Object of Information: Britain during the Late Victorian and Edwardian Period.” The Routledge Handbook of Information History, 247-263. Routledge. 

 

Weller, Toni (December 2025). “Framing the Field: Reflections on The Routledge Handbook of Information History.” Interfaces: Essays and Reviews on Computing and Culture Vol. 6, Charles Babbage Institute, University of Minnesota, 81-92.


About the author: Toni Weller is a Visiting Research Fellow at De Montfort University, UK. She has worked in the field of information history for over twenty years and has published extensively on a range of topics including the theory and methodology of information history, women and information, Victorian information culture, and the history of the surveillance state. She sits on several editorial boards and is a former editor of the international journal Library & Information History.


 

Figure 1: Opening Talk at FOSDEM 2025. CC-BY-SA RichiH.

EU Open Source Week 2025 – a series of events held in Brussels from late January to early February – brought together a broad spectrum of voices from across the open-source landscape. Over the course of the week, the program offered a snapshot of Europe’s open-source ecosystem in all its variety. It was, at times, like walking through greenhouses in a botanical garden, each housing plants from different climates, as participants moved between events tailored to open-source activists, developers, business managers, and policymakers.

The European Open Source Awards, organized for the first time as part of the week’s events by Brussels-based think tank OpenForum Europe, were followed by a black-tie gala. One of the organizers explained the reasoning behind this: the aim was to create a different public image of open source, one not dominated by the (clichéd) visual shorthand of hooded hackers, but a sophisticated, high-society cocktail party – moving open source “from the backroom to the ballroom”. The following day brought the EU OS Policy Summit, where business suits were the norm. Then came the Free and Open source Software Developers’ European Meeting (FOSDEM), the long-running and more activist conference, which brought hoodies and T-shirts firmly back into fashion. The conference was accompanied by OFFDEM events, which protested what some see as the increasing commercialization of even grassroots open-source spaces like FOSDEM.

The variety of events during EU Open Source Week demonstrates how successfully open-source software has expanded into different domains. Over its history – spanning some 50 years, if not more – the discourses around open source have evolved significantly. This essay argues that drawing on the rich history of open source, and the discourses that have formed around it, allows us to frame open source in various ways. In particular, past discourses can help make certain frames resonate more effectively in policy circles. The essay centers on the question of how open source is framed today to promote its adoption in the public sector, focusing on five widely used frames. An overview of the mechanics of framing is followed by a historical sketch of the open-source movement, before delving into the specific frames used to promote open source in public sector contexts.

Figure 2: Guests (including the author, left) at the black-tie cocktail reception following the European OS Awards Ceremony. Photo courtesy of the European Open Source Academy, https://europeanopensource.academy/.

Framing policy debates

One of the most common definitions of framing was formulated by the scholar of political communication Robert M. Entman in 1993: “To frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item described” (Entman, 1993). Framing works by linking content, or a “cue” – such as a piece of news, a concept, or an event – to a broader interpretive frame. Framing thus “allows people to make a graceful normative leap from is to ought” (van Hulst et al., 2025; Schön and Rein, 1994), as different frames point towards different strategies for action.

Through framing – whether deliberate or unintended – certain interests are advanced or marginalised, power structures are reinforced or challenged, and some actors are included in, or excluded from, policy conversations (van Hulst et al., 2025). Framing, then, is not only a matter of interpretation but also a discursive device: a means of shaping action, influencing how decisions are taken and how, eventually, policies take form. Those who are able to define a debate can guide it in a particular direction. By emphasizing certain aspects over others, it is possible to shape the frames that others adopt in their decision-making. When multiple actors seek to shape a policy debate, competing frames often emerge. These so-called “frame contests” involve the construction, promotion, and contestation of frames.

In the case of open source in the public sector, past discourses – understood here as socio-political realities constituted through language in use – are drawn upon to frame open source in different ways and promote its adoption. On the one hand, the wide range of available interpretations reflects the maturity and diversity of the field. On the other hand, as this essay argues, even actors who pursue similar goals – namely, the adoption and promotion of open source in the public sector – tend to frame it differently, which can risk appearing contradictory to external audiences. Moreover, frames are sometimes adopted that are ineffective in a particular context because they do not resonate with the intended audience. Being more attentive to framing, and more deliberate in its use, may help prevent a fragmentation of voices and better align communication strategies around open source in the public sector.

Figure 3: UNIVAC I, ca. early 1950s.

A brief history of free and open-source software

In the mid-twentieth century, early software development largely took place in academic and research settings, where sharing information was the norm. This context shaped understandings of software as something to be shared and adapted, rather than sold as a finished product. Moreover, at the time, the number of computers in use was relatively small, and these machines often had unique hardware and software requirements. As a result, software frequently had to be modified to run on different systems. Operating software and compilers were typically included with hardware purchases at no additional cost, and source code was usually distributed alongside machine code. One of the earliest examples of this model is the UNIVAC A-2 system (an early equivalent of today’s compilers), developed in 1953, which was released to customers with its source code and an invitation to share improvements. In the years that followed, similar practices continued; most IBM mainframe software, for example, was also distributed with source code included. SHARE Inc., a group founded in 1955 by IBM users, played a key role in facilitating this exchange of source code and collaborative development among early computer users.

Over time, however, hardware differences narrowed and software became more portable across systems, so hardware alone was no longer a reliable source of competitive advantage. At the same time, software development became more professionalized – Microsoft was founded in 1975 – and manufacturers began treating software as a valuable asset. This shift was driven not only by the pursuit of new revenue streams, but also by legal pressures: in 1969, IBM, facing antitrust lawsuits, announced its “unbundling” decision, beginning to price software and services separately from hardware. This trend gained momentum in 1978, when the US Commission on New Technological Uses of Copyrighted Works (CONTU) asserted that “computer programs, to the extent that they embody an author’s original creation, are proper subject matter of copyright” (CONTU, 1979). The CONTU ruling – along with subsequent legal cases such as Apple v. Franklin in 1983, which extended copyright protection to binary code – established that copyright law was applicable to computer programs. This laid the legal groundwork for software licensing and ushered in the proprietary software business model: both hardware vendors and emerging software companies began charging for software licenses as standard practice. Software was increasingly marketed as a product, and legal protections were used to assert ownership and control over code.

The rise of proprietary software did not go unchallenged, though, and soon sparked the Free Software counter-movement, led in the 1980s by developer Richard Stallman. In his GNU Manifesto, Stallman wrote: “I consider that the golden rule requires that if I like a program, I must share it with other people who like it. Software sellers want to divide the users and conquer them, making each user agree not to share with others. I refuse to break solidarity with other users in this way.” (Stallman, 1985) The free software movement thus emerged as an activist counter-model to “software sellers.” Stallman developed the first copyleft license, which allowed users to freely use and modify the software as long as any modified versions were shared under the same terms.

While proprietary vendors generally found copyleft licenses obnoxious, open-source software offered obvious advantages that were not lost on the business world – such as the free sharing of knowledge, co-development of innovative software solutions, and the ability to profit from work done by people not on the company’s payroll. Increasingly, open source was regarded as worthwhile from a business perspective, too, eventually leading to a rebranding of “free software” as “open source software.” In 1998, the Open Source Initiative (OSI) was founded with the goal of making free software more palatable to the business world:

We're trying to pitch our concept to the corporate world now. We have a winning product, but our positioning, in the past, has been awful. The term “free software” has a load of fatal baggage; to a businessperson, it's too redolent of fanaticism and flakiness and strident anti-commercialism (Open Source Initiative, 1999).

The rebranding was a success: projects on SourceForge, an OSS development platform, grew exponentially, and it seemed that, following the dot-com crash, unemployed software engineers flocked to open-source projects (Lemos, 2002). Software researchers Margaret Elliott and Walt Scacchi found that, by the early noughties, software engineers’ motivations for joining open-source projects had diversified. While the initial motivation was an idealistic vision of contributing to free code, open source as a business case and career path became stronger motivators following the OSI rebranding (Elliott & Scacchi, 2008).

Open source in today’s public sector

Open source as freedom, business, and occupation: the open-source movement has been shaped by various discourses during its history, offering policymakers a variety of ways to frame and promote the use of OSS in the public sector today. The early 2000s saw a first wave of policy discussions about open source, with governments primarily adopting OSS to reduce procurement costs. At the time, OSS was not widely regarded as technologically advantageous, and legal concerns often hindered its adoption. From around 2015, however, a second wave began, marked by a shift in focus: rather than valuing OSS solely for cost reduction, governments increasingly prized its potential to ensure digital sovereignty and support local software ecosystems (Blind et al., 2021). The changing motivations of software engineers to contribute to OSS, as identified by Elliott and Scacchi, are mirrored in this evolving policy focus.

Today, a range of – sometimes competing – frames are used to promote open source in the public sector, including OSS as public infrastructure, as an instrument for achieving digital sovereignty, as an economic accelerator, as a means for improved security, and as business-as-usual.

OSS as public infrastructure

The public infrastructure frame conceptualizes OSS not as a product, but as part of the foundational digital infrastructure of contemporary societies. From this perspective, publicly funded software development is a form of infrastructure investment – analogous to funding for roads, utilities, or healthcare systems. This framing underpins the “Public Money, Public Code” campaign, run by the Free Software Foundation Europe (FSFE) since 2017, which raises awareness of the public sector’s reliance on proprietary software for essential services. The FSFE initiative argues that public procurement should yield publicly reusable code, reinforcing the notion of software as a digital commons.

Complementing this discursive shift, institutions like Germany’s Sovereign Tech Agency – established in 2022, then as the Sovereign Tech Fund – have operationalized the infrastructure frame by directly funding the maintenance of critical open-source components. The public infrastructure frame emphasizes proactive state involvement and long-term investment in non-market production. In this way, it aligns with the Free Software movement’s emphasis on civic rights: realizing digital rights – such as privacy, access to basic public goods, including software, and democratic participation – depends on building and maintaining open digital infrastructure that serves the common good rather than private interests.

OSS as a means to achieve digital sovereignty

The notion of digital sovereignty has gained attention recently, especially in Europe, driven by the rapid “cloud-ification” of public services, increasing dependence on a small number of dominant – primarily U.S.-based – tech providers, and the resulting risks of vendor lock-in. These concerns are further amplified by growing geopolitical instabilities, which have revealed the vulnerabilities associated with relying on foreign-controlled digital infrastructure.

In response, European governments are seeking greater control over the technologies that underpin critical public functions: investments in domestically governed, open-source public infrastructure are meant to reduce dependency on foreign, proprietary technologies. In this context, digital sovereignty agendas increasingly highlight the strategic value of OSS as a means to ensure technological agency. Bria et al. (2025) emphasize that, with the right policy support, OSS can evolve into a strategic asset that underpins Europe’s technological sovereignty and global competitiveness. Likewise, the German federal Centre for Digital Sovereignty (ZenDiS) was established to take a leading role in promoting OSS for digital sovereignty (IT-Planungsrat, 2021). Other European governments are also investing in OSS, including France – through its Interministerial Digital Directorate (DINUM) (Thévenet et al., 2023) – and the Netherlands, among others.

OSS as economic accelerator

The economic accelerator frame positions OSS not so much as a matter of civic rights or Daseinsvorsorge (essential public services that should be supplied by the state), but as a matter of strategic industrial policy. It links the adoption and support of OSS with goals such as technological competitiveness, innovation capacity – due to free sharing of code – and the development of resilient domestic digital industries.

The EuroStack initiative, which has brought together European technology firms and research institutions, has leaned into this approach. The initiative calls for the establishment of a sovereign European tech stack, and frames open source not merely as a tool for reducing vendor lock-in, but as a cornerstone of industrial renewal. Its advocacy for “Buy European” procurement treats OSS ecosystems as economic assets to be cultivated (Caffarra et al., 2025). As Münßinger et al. (2025) note, this marks a shift from earlier ideals of borderless digital markets in Europe toward greater autonomy and control. Openness remains central but is reframed as “sovereign openness” – a concept that balances transparency with protecting and strengthening Europe’s digital autonomy.

OSS as a means for improved security

An asset of OSS is that it is possible to audit and patch vulnerabilities independently of software vendors – security-critical code can be continuously reviewed by internal security teams or trusted external partners. The transparency of OSS allows vulnerabilities to be identified and fixed more quickly than in closed systems. Of course, this benefit depends on active maintenance. Without an active developer community or professional software vendors maintaining the code, potential security issues may go unnoticed or unresolved.

The adoption of OSS by cybersecurity agencies like Luxembourg’s House of Cybersecurity and France’s ANSSI is meant to underscore and further develop the security benefits of OSS. These agencies not only use OSS but also contribute to its development and long-term sustainability (Linåker & Muto, 2024). Moreover, the German Federal Office for Information Security (BSI) and the Centre for Digital Sovereignty have outlined a strategy to secure open-source software supply chains by establishing quality and security standards, making dependencies transparent, and providing traceable proofs of origin (BSI & ZenDiS, 2025).

OSS as business-as-usual

In the day-to-day operations of public sector IT, it can be both practical and effective to present OSS not as disruptive, but as a routine component of digital infrastructure. Rather than underlining high-level strategic goals, this framing evaluates OSS on similar grounds as proprietary alternatives, such as cost-efficiency, maintainability, security or vendor support. This helps integrate OSS into existing procurement processes without requiring a fundamental shift in institutional mindset. 

Business-as-usual thus normalizes the presence of OSS in public sector IT landscapes without recourse to ideological or strategic narratives. Here, OSS is not so much a civic rights matter nor a geopolitical instrument, but simply one among many viable procurement choices, with benefits and drawbacks. This pragmatism is fueled by the increasing professionalization – and commercialization – of the OSS ecosystem, with even big (proprietary and non-proprietary) vendors contributing to OSS projects and offering enterprise-grade support (Haese & Peukert, 2024). While institutions such as the Open Source Business Alliance (OSBA), and more recently its Dutch counterpart, DOSBA, actively raise awareness of the specific needs of the open-source ecosystem, their work to professionalize OSS has also contributed to framing it as business-as-usual.

Developing coherent public OSS strategies

The various OSS frames are often complementary but also expose tensions. Understanding how they align, and where they might clash, is crucial to developing a coherent OSS strategy. All frames converge on the idea that OSS is an increasingly indispensable part of public digital infrastructures. The security frame, for instance, reinforces both the digital sovereignty and public infrastructure perspectives: public institutions need security auditability, traceability, and independence from opaque vendors to maintain control over critical systems. In turn, this need for control and resilience aligns with the economic accelerator frame, which views investment in OSS as a strategic industrial move. The business-as-usual frame, by framing OSS as a routine procurement choice, lowers barriers to adoption and allows other, more strategic benefits – like improved sovereignty or economic competitiveness – to materialize incrementally through actual OSS deployments.

However, tensions emerge when these frames imply conflicting assumptions about how OSS should be governed or funded. The public infrastructure frame, for example, tends to foreground the chronic underfunding of foundational OSS components and demands sustained public investment. This imperative can clash with the business-as-usual approach, which assumes that OSS comes with sufficient maturity and support to be integrated without extraordinary effort. Treating OSS as just another tool in the procurement toolbox risks ignoring the structural fragility of many core OSS projects, which often depend on a small number of unpaid or underpaid maintainers. Similarly, the economic accelerator frame tends to instrumentalize OSS for industrial policy goals, prioritizing competitiveness. This might marginalize projects that are socially valuable but not market-driven, undercutting the public infrastructure and digital rights logic that initially gave rise to initiatives like “Public Money, Public Code”.

While these frames can be strategically layered to address different stakeholders, they should be used deliberately to ensure that operational decisions (e.g. treating OSS as business-as-usual) do not inadvertently undermine the very conditions that make OSS viable in the first place. A mature OSS strategy in the public sector will need to balance short-term pragmatism with long-term stewardship and investment. Otherwise, treating OSS as business-as-usual may simply guarantee business as usual – including all the usual problems of neglected public infrastructure.


Bibliography

Blind, K., Böhm, M., Grzegorzewska, P., Katz, A., Muto, S., Pätsch, S., & Schubert, T. (2021). The impact of Open Source Software and Hardware on technological independence, competitiveness and innovation in the EU economy. Publications Office of the European Union.

Bria, F., Timmers, P., & Gernone, F. (2025). EuroStack – A European alternative for digital sovereignty. Bertelsmann Stiftung.

Caffarra, C., Gambardella, A., Fermigier, S., Hidalgo, A., Lechelle, Y., Parsons, C., Styma, F., & Yen, A. (2025). Deploying the EuroStack: What’s needed now. EuroStack Initiative.

Elliott, M. S., & Scacchi, W. (2008). “Mobilization of software developers: The free software movement.” Information Technology & People, 21(1), 4–33.

Entman, R. M. (1993). “Framing: Toward clarification of a fractured paradigm.” Journal of Communication, 43(4), 51–58.

Haese, J., & Peukert, C. (2024). “Open at the core: Moving from proprietary technology to building a product on open source software.” Management Science. Forthcoming.

IT-Planungsrat. (2021, April). Organisationskonzept: Zentrum für Digitale Souveränität (Arbeitstitel) – Konzeption (1. Ausbaustufe). IT-Planungsrat.

Lemos, R. (2002, February 18). Dot-com dropouts share open-source love. CNET News. https://www.cnet.com/tech/tech-industry/dot-com-dropouts-share-open-source-love/

Linåker, J., & Muto, S. (2024). Software reuse through open source software in the public sector: A qualitative survey on policy and practice. RISE Research Institutes of Sweden.

Münßinger, M., Schröder-Bergen, S., & Glasze, G. (2025). “From Neoliberal Openness to Sovereign Openness: Analysing Imaginaries of European Digital Policies.” Geopolitics, 1-28.

National Commission on New Technological Uses of Copyrighted Works. (1979). Final report (pp. 1, 26–27, 37).

Open Source Initiative. (1999). The case for open source: Hackers' version. Archived at https://web.archive.org/web/19991013094510/http://opensource.org/for-hackers.html

Schön, D. A., & Rein, M. (1994). Frame reflection: Toward the resolution of intractable policy controversies. Basic Books.

Stallman, R. (1985). “The GNU Manifesto.” Dr. Dobb’s Journal of Software Tools, 10(3), 30–34.

Thévenet, A., O’Riordan, C., Karhu, J., Grzegorzewska, P., Cacciaguerra Ranghieri, G., Chiarelli, F., Devenyi, V., Di Giacomo, D., & Zoboli, E. (2023). OSS Country Intelligence Report 23. Interoperable Europe Portal.

Tozzi, C. (2017). For Fun and Profit: A History of the Free and Open Source Software Revolution. MIT Press.

van Hulst, M., Metze, T., Dewulf, A., de Vries, J., van Bommel, S., & van Ostaijen, M. (2025). “Discourse, framing and narrative: Three ways of doing critical, interpretive policy analysis.” Critical Policy Studies, 19(1), 74–96.

 

Beiermann, Lea (October 2025). “From the Backroom to the Ballroom: (Re)framing Open Source for the Public Sector.” Interfaces: Essays and Reviews on Computing and Culture Vol. 6, Charles Babbage Institute, University of Minnesota, 71-80.


About the author: Lea Beiermann is Partnership Lead at the German Centre for Digital Sovereignty (ZenDiS), where she builds strategic partnerships across the public sector and open-source ecosystem to promote digital sovereignty. With an academic background in STS and the history of technology, Lea brings a deep understanding of the social implications of technological transition, informing her current work at the intersection of software and digital policy.


 

Social media is a powerful tool for adults. However, children were not the focus of its initial user base. The question, “what if I am the only person dying and decaying?” resonates in chronic illness community subgroups; their shared experience of illness likely improves their quality of care. However, children don’t have the maturity to understand how and when to engage with chronic illness content the way adults likely do. For example, adults can consent to viewing disturbing medical photos which might frighten children. In addition, most adolescents are not old enough to remember when reality television was at its peak. As the production of reality TV shows like TLC’s What Not to Wear and MTV’s Teen Mom declined in the 2010s, social media influencing rose. In many ways, social media has become the new reality TV. The issue is that the youngest social media users don’t always have the cognitive capacity to distinguish what is manufactured from what is normal. These users have no concept of what reality TV used to be, and what they see online today may be a type of reality manufactured for entertainment.

Children also haven’t met that many people in the world yet, and most of the people they have met are the same age as them. The internet exposes them to people of all age groups at a stage when they are quite vulnerable. In addition, data agreements on social media platforms are not written in child-friendly language. Children cannot meaningfully consent to privacy and data rules if they are unable to read the form.

Children are full members of digital society. As such, children deserve age-based equity in exploring the internet. However, they do not have the tools they need to roam without fear. According to Jonathan Haidt’s March 2024 book, The Anxious Generation, social media platforms can powerfully shape a young developing mind. As Haidt writes in the beginning of the book, “preteens are at a particularly vulnerable point in development. As they begin puberty, they are often socially insecure, easily swayed by peer pressure, and easily lured to any activity that seems to offer social validation” (Haidt 4). In adolescence, classmates and peers become the new parents. Youth often ask friends for guidance online instead of adults in their lives. Sometimes these peers are strangers on a forum. Often, these platforms become the official advisory board in a young person’s life. However, an adolescent’s struggle with “looks or books” is not trivial. Difficulty with feeling good about the way that one’s adult face looks or about studying for one’s future is quite serious. Further, features such as “like counts” and “story views” reinforce existing self-esteem issues. Haidt writes, “these companies have rewired childhood and changed human development on an almost unimaginable scale” (Haidt 3). The stakes are serious. There will be very complex world problems to solve in the 2040s and 2050s. What will happen to the economy if, in twenty years, the majority of the workforce does not believe in their ability to innovate or to solve complex problems? What happens if members of the workforce cannot use their imaginations or feel good about themselves? Further, what could be more important for the future happiness of a country than the well-being of its current children?

As performing arts child development specialist Peter Seidman states, "young people who learn how to feel good about themselves and take good care of themselves, embrace the kinds of choices and habits that lead to success" (The Seidman Principle). From the foundation of that principle, one might reasonably assert that, without intervention, young people growing up today will not achieve the kind of success that previous generations experienced. This is not just in terms of material things or "by the world's standards" of quantitative success. Outward signs of material success have changed rapidly due to shifts in the economy, housing availability, and other factors. Rather, and more importantly, these young people may not achieve a deeper level of personal bests and life satisfaction.

According to Abraham Maslow’s hierarchy of needs, self-actualization only happens when all other needs are met. It often leads to a more meaningful life, because one becomes their most creative, intelligent and authentic self. Simply put, by reaching self-actualization, young people are living to their fullest potential. Beyond the data on child development, it does seem that the generations growing up with social media are not as successful in material or personal ways. These changes will have a massive effect on the hiring potential of our future workforce, and therefore, Canada’s human capital.

So why have lawmakers not taken drastic action to investigate or to prevent child digital harm? Those in charge of regulating children’s health hear a resounding call for “more research”. It is true that there is still much to learn about digital adolescence. However, researchers have already found a significant correlation between children’s health and their usage patterns on social media. The call for caution from researchers may actually serve to delay tech laws. I propose the term “debate escrow” to describe a debate in which those who call for criticism are asked to read a vast canon or to produce further time-consuming research before resuming the debate. The term comes from key escrow in cryptography, where the keys needed to decrypt encrypted data are held in escrow so that, under certain circumstances, an authorized third party can gain access to them. The issue is that “debate escrow” delays the conversation under the guise of intellectual thoroughness. When scholars gesture to an expansive canon instead of a finite list of sources to make their point, they keep their naysayers busy. In addition, before they gesture to a canon, they often reframe disagreement as a “lack of knowing”. It is as if their critics must not understand enough of what is going on if they take an opposing stance. In sum, if a critic is busy reading hundreds of books or conducting more research, they are silent. That ploy can stall policy conversation and buy time. It also excludes everyday members of the public from urgent debates which directly impact them. At what point will we know enough about the correlation between social media and adolescent development to act? From a public health perspective, we may already have enough evidence. Waiting for certainty is not “responsible science” when children’s wellbeing is at stake. If that is what is causing the delay, then it is a failure to fulfill our responsibility to protect the future of our nation. The fact that this threshold of “knowing enough” is never made concrete suggests that these calls for research function as a type of “debate escrow”.
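
Readers unfamiliar with the cryptographic source of the metaphor may find a minimal sketch helpful. The snippet below is purely illustrative: it assumes Python’s third-party cryptography package, and the scenario and variable names are invented for clarity rather than drawn from any system discussed in this essay.

```python
# A minimal, illustrative sketch of key escrow using the third-party Python
# "cryptography" package; the scenario and names are invented for clarity.
from cryptography.fernet import Fernet

# A user encrypts data with a key that, ordinarily, only they hold.
user_key = Fernet.generate_key()
ciphertext = Fernet(user_key).encrypt(b"private message")

# Under key escrow, a copy of the user's key is itself encrypted with an
# escrow authority's key and deposited with that third party.
authority_key = Fernet.generate_key()
escrowed_key = Fernet(authority_key).encrypt(user_key)

# Under authorized circumstances, the authority releases the user's key...
released_key = Fernet(authority_key).decrypt(escrowed_key)

# ...and the data can be read without the user's cooperation.
assert Fernet(released_key).decrypt(ciphertext) == b"private message"
```

In debate escrow, by analogy, the "key" to resuming the conversation is held by those who decide when enough reading or research has been done.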

However, not every call for caution is a form of “debate escrow”. Science can be slow. Replication of experiments does take time. The replication issues in psychology research show that rigorous verification is important. That kind of caution does advance knowledge in a helpful way. Debate escrow exploits that culture of thoroughness as a stalling tactic. It demands more study not for the sake of truth, but to silence critics. The analysis in this paper draws on various resources to examine how corporate funding structures and vague rhetoric shape the reception of social media debates. The paper undertakes a secondary analysis of publicly available sources to probe these concepts. These include social media platform calls for proposals and past award winners, including the Google Academic Research Awards website and the Meta Instagram research fund website.

The analysis also includes public material from past winners of government funding sources. The two sources inspected were the Australian Research Council (ARC) and Canada’s Social Sciences and Humanities Research Council (SSHRC). Specifically, it examines the faculty websites, research websites, publications, and conference output for these projects. Overall, the paper tracks the relationship between cautionary rhetoric and the lack of conflict-of-interest disclosures in social media scholarship.

A word on the limitations of the paper: the analysis draws on public-facing material, so the absence of a clear, publicly documented link between specific research projects and corporate funds does not prove that no such link exists. I do not have access to informal discussions between companies and researchers.

Is All Social Media Science the Same?

While the parallels between tobacco research and social media funding rhetoric may be a coincidence, they warrant a moment of thought. In The Cigarette: A Political History, Sarah Milov presents an infamous 1969 memo written by the vice-president of marketing at Brown & Williamson which demonstrates the industry’s duplicity:

‘In thinking over what we might do to improve the case for cigarettes, I have looked at the problem somewhat like the marketing of a new brand,’ he wrote. ‘Doubt is our product since it is the best means of competing with the “body of fact” that exists in the mind of the general public.’ Doubt’s greatest virtue was that it stoked public perception that a “controversy” existed around what should have been a dispositive scientific fact: smoking was the leading cause of preventable death in the United States. The industry enlisted scientists, physicians, statisticians, social scientists, and even historians in an expert-driven campaign to subvert regulation and keep people smoking. (Milov 9)

Figure 1: Cigarette ad, 1960 (Wiki Commons).

The effect of this strategy was decades of regulatory delay during which many people continued to smoke under the false impression that the evidence was inconclusive. The lesson is that conflict-of-interest funding can convert appropriate caution into an unreasonable delay. This pattern is one which social media research now risks repeating.  

As Milov writes, the history of tobacco neither began nor ended with cigarettes. However, tobacco companies paid for research which would prevent a drop in sales. Likewise, the history of young people communicating with technology did not begin or end with the recent expansion of social media platforms, yet the discourse has changed. 

Figure 2: Teenage girl talks on the phone in 1956 (LIFE).

Children have spoken on the telephone and found ways to talk to their friends on “walkie talkies” for many decades. However, issues arise when studies on children and social media are both funded by corporations and difficult to replicate. It is important to note in the comparison between tobacco and social media that tobacco is inherently harmful, while social media does have positive elements to it. The comparison between these two situations is not based on their effect on public health, but instead on some of the difficulties in communicating their scientific experiments to the general public.

In the 2017 interview “Science was never intended to be in the market, but today it’s a commodity”, Andrea Saltelli and Silvio Funtowicz discuss the recent crisis of replicability in science. “Replicability” means that an experiment should produce the same results if repeated. There have been many articles about researchers who tried to replicate medical experiments and were disappointed to find how many of them had failed. Because social media platforms change their own landscape so quickly, these experiments are especially difficult to replicate. For example, if one were to conduct an experiment on a group of 7–9-year-olds and their engagement on Instagram in 2021, it would be very difficult to repeat the experiment in 2025, because much of Instagram has changed. Among other changes, instead of scrolling being the main feature of the platform, there are now disappearing photos and messages with a different sense of impermanence.

There is a well-researched and ongoing crisis of science as a commodity in the Western world. For reference, the Saltelli interview discusses the broader trend of how science has moved to the market and is sold at a price. Further, the historian Philip Mirowski has detailed the process of commercializing experiments in the book Science-Mart: Privatizing American Science. When science becomes a supermarket and is sold over the counter, the result is that quality disappears. In 2011, Nobel Prize laureate Daniel Kahneman, who wrote Thinking, Fast and Slow, pointed to the seriousness of trusting science built on experiments that could not be replicated and validated. These problems were especially pronounced in “softer” sciences, such as psychology, since they are much more subject to bias than “harder” sciences, like chemistry or engineering. In addition, the virtue lent to “slow science” as a thorough pursuit does not necessarily carry over to social media, since it is such a dynamic landscape of software updates. This is the crucial difference. Replication issues reveal the inherent limits of the scientific method, yet debate escrow converts these limits into rhetorical tactics. It is fine for scholars to acknowledge complexity and still argue for various precautions. However, it is when complexity becomes an excuse to postpone protective action that caution turns into escrow.

Since social media changes so fast, it is not just difficult to replicate for validity but also complex to study definitively. Some things are too difficult to study quickly because of their dimensions and the way that they intersect with the world. A chemistry experiment will remain the same regardless of public events, but social media experiments won’t. For example, the happiness of young people on the internet in a given study period may vary greatly depending on the news, what social media features are trending, and what is going on in their personal lives. Being young and online during COVID-19 was a completely different experience from being young and online before the pandemic. Social media science needs to learn humility and be prepared for when it cannot fully explore every dimension of these issues. There will likely always be more that is unknown about young people and the internet. Unfortunately, more research cannot solve every problem. Instead, those in charge must take action to protect young people with the information that they currently have.

However, calls for more research appear throughout the prominent literature on youth and social media, both from psychologists and from STS authors. In the chapter “The Promise of Digital Wellness to Promote Youth Well-Being and Healthy Communities” of the book Interpersonal Relationships in the Contemporary 21st Century Society, the authors state plainly, “more research is needed that explores DW” (Laffier et al.). Although the book came out in 2024, after a decade of research, there is still a call for more. In the 2023 U.S. Surgeon General’s Advisory report on Social Media and Youth Mental Health, the author writes:

More research is needed to fully understand the impact of social media; however, the current body of evidence indicates that while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents. At this time, we do not yet have enough evidence to determine if social media is sufficiently safe for children and adolescents. We must acknowledge the growing body of research about potential harms, increase our collective understanding of the risks associated with social media use, and urgently take action to create safe and healthy digital environments that minimize harm and safeguard children’s and adolescents’ mental health and well-being during critical stages of development. (Murthy)

The U.S. Surgeon General references a vast literature of conflicting evidence on social media research, although they are leaning towards agreeing that there is an issue. That it is a lean, and not a clear point in that direction, reflects how much conflicting evidence exists in the scholarly world. However, it is not abundantly clear whether these medical professionals are aware of the conflict-of-interest funding schemes which drive social media research. Instead, it seems as though they synthesize literature as if it were as strongly regulated as medical research. Yet, why shouldn’t they? In the 2024 JMIR publication, “Social Media Use in Adolescents: Bans, Benefits, and Emotion Regulation Behaviors”, McAlister et al. also plainly state, “more research is needed to identify how various platforms, usage patterns, and algorithms specifically impact adolescent mental health” (McAlister et al.). Another more recent publication, the article “How’s Life for Children in the Digital Age?”, published in May 2025, also bluntly states, “a key conclusion of this report is the need to better understand how offline factors contribute to or protect against problematic digital media use and its impact on well-being, through additional data and analysis” (Organization for Economic Co-operation and Development). For further reading, look to recent publications in psychology and social media studies, with an eye for rhetoric like “better understanding,” “more research,” “enough evidence,” or search for the term “conflicting.” These repeated deferrals constitute a form of debate escrow, as calls for more work keep naysayers silent. As a result, these cautious statements delay the immediate protective action which children deserve.
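
To make that exercise concrete, the sketch below shows one way to tally such deferral rhetoric across a set of texts. It is a hypothetical illustration: the phrase list echoes the terms quoted above, while the sample abstracts are invented for demonstration and are not quotations from the studies cited here.

```python
# A hypothetical sketch of the search suggested above: count occurrences of
# deferral rhetoric in a set of texts. The sample abstracts are invented.
import re
from collections import Counter

DEFERRAL_PHRASES = [
    "more research is needed",
    "better understanding",
    "enough evidence",
    "conflicting",
]

def count_deferrals(texts):
    """Return case-insensitive counts of each deferral phrase across texts."""
    counts = Counter()
    for text in texts:
        for phrase in DEFERRAL_PHRASES:
            counts[phrase] += len(
                re.findall(re.escape(phrase), text, re.IGNORECASE)
            )
    return counts

sample_abstracts = [
    "More research is needed to fully understand the impact of social media.",
    "Findings remain conflicting, and we do not yet have enough evidence.",
]
print(count_deferrals(sample_abstracts))
# e.g. Counter({'more research is needed': 1, 'enough evidence': 1, ...})
```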

Figure 3: Children work in the computer lab in 1989 (Wiki Commons).

Recent Social Media Research Opportunity Schemes

There are three common funding avenues for research: gifts, awards, and contracts. Gifts are unrestricted, tax-deductible funds with no deliverable attached. Awards are often merit-based and can overlap with gifts. Both can be direct payments to a university which then funds the researcher. Contracts, by contrast, contain deliverables and formal deadlines, are auditable, and are subject to Freedom of Information requests.

In recent years, major tech companies such as Google and Meta have expanded their investment in academic research on digital adolescent engagement. Two prominent awards include Google’s Academic Research Awards (GARA) and Meta’s Instagram Research Awards (MIRA). These corporations have funded multiple projects at prestigious Universities to study the effects of their platforms over the last few years (Google Academic Research Awards). While Google might not seem like a social media stakeholder, it owns YouTube. 

The fact that these companies are outsourcing research is not inherently harmful. It makes sense that tech companies would want to better understand how their product impacts others. Some of the outcomes may be neutral. However, why would they fund external research when they could complete that research in house? Further, why would they fund something that does not benefit them?

The GARA funding structure presents a list of topics which are eligible for funds in any given cycle. The proposals must align with Google ethics boards and principles. The award is about 100,000 USD. Each year, Google spends over $1 million in “unrestricted gifts” to the Universities which host social media researchers. According to the website, the funds go directly to the University which supports the PI as a gift (Google Academic Research Awards). The public-facing term “award” signals social capital, but its method of releasing funding to the University is different. If the University as a whole receives a gift, there is a subtle pressure on other members of the research organization to comply with the gift-giving entity. There is also a distinct legal difference between a research contract and a gift. If it were a formal contract, the funding would be subject to a COI report, Freedom of Information process, IP requests, an external human ethics review board led by a committee of peers, and public disclosure. Gifts can bypass these steps depending on the specific University and their policy guidelines for academics. Gifts are also tougher to trace as a funding string, and they present as a “no strings attached” independent contribution. They can also be a charitable tax deduction. More specifically, the funding often goes towards salaries for research assistants, or even to compensate interviewees for their time. Technically, it can go towards almost anything related to the research. However, only certain Universities will allow gifts to top up the salary of the PI. It depends on the governing body of the specific University.

On a legal level, if a funding string is not attached to a contract, there is no requirement for a deliverable. Yet, researchers must propose a deliverable to win. In an academic contract, there are deadlines for deliverables attached to funding. If there is no contract or request for a deliverable, there is no direct trace to Google in later publications or reports. The Terms page of the GARA website provides more information on contracts and deliverables. For further information, see the subpoint “NOT AN OFFER OR CONTRACT” (Terms for Google Academic Research Awards).

Beyond the legal differences, it seems right to pause on the linguistic texture of the terms “award” and “gift”. An award signals that the designation is for work already done. In a way, Google is applauding some of the most intelligent and skilled social media researchers in the world. A “gift” implies generosity and nothing in return, for those Google deems extremely accomplished. However, $100,000 is not a small gift. These conflicts compromise a researcher’s objectivity, even if the influence is subconscious, which in turn changes the quality of the science they conduct. Especially without a clear COI disclosure, these gifts create a precedent in social media research which warrants attention. Those in the medical field and beyond may not be as aware of these funding schemes, because they do not see the calls for proposals.

Further, Meta’s Instagram Research Award funding shares some similarities, although the average amount is 50,000 USD. Each year Meta spends an estimated $300,000 on gifts for about six researchers (Meta Instagram Research Awards). MIRA seems to have a much more public-facing attitude, as it encourages awardees to publish insights from their research. Does that mean that some of the research done only gets distributed back to Meta? According to the Meta website, their funding scheme also provides the host University with an “unrestricted gift” (Meta Instagram Research Awards). As a result, the same legal caveats which apply to GARA also apply to MIRA.

In contrast, SSHRC and ARC have a rigid set of rules which their recipients must follow. Those contracts are a formal agreement, with a selection process run by arm’s-length academics. Google and Meta, by comparison, do not disclose who sits on the decision-making committee. SSHRC has required deliverables and mandatory disclosure rules for by-product publications. SSHRC contracts, since they are contracts and not awards, are subject to audit and transparency laws. In addition, individual researchers choose a topic within the scope of the award. SSHRC is frequently mentioned on faculty websites as well as in conference proceedings and CVs, and academics are quick to note that they are SSHRC funded. If the research involves human subjects, it must pass an ethics review board. Corporate-funded projects are not subject to those standards. Although most technical research does not involve humans, when the human mind is involved, as in social media research, there should be a human-subject ethics review board.

The researchers who apply for GARA and MIRA are likely to be mostly neutral actors. They often have an interest in something which the award funds, and perhaps excitement at the prospect of further investigation. The issue with the GARA/MIRA scheme is that it contributes to a broader, more troubling pattern within the STS community which seems to be both new and escalating quickly. GARA/MIRA have only appeared in the last few years as a potential source of COI.

As Philip Mirowski argues in Science-Mart, corporations have long used scientific credibility to launder their interests. As mentioned before, although company websites may list award recipients, the resulting publications often omit clear acknowledgements of corporate funding. That absence is not trivial. As Joel Lexchin explains in the Science and Engineering Ethics paper “Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications”, “not only does the conduct of these trials lead to misleading information but they are probably unethical in so far as they have the potential to expose patients to harm or to prolong suffering” (251). Further, as Sergio Sismondo points out in “How Pharmaceutical Industry Funding Affects Trial Outcomes: Causal Structures and Responses”, COI perhaps does not operate on a conscious level, but the act of accepting funding from a company creates a gift relationship (Sismondo 1910). The person receiving the “gift” feels an obligation to repay the present in some manner. In “All Gifts Large and Small: Toward an Understanding of the Ethics of Pharmaceutical Industry Gift-Giving”, Dana Katz writes, “when a gift or gesture of any size is bestowed, it imposes on the recipient a sense of indebtedness. The obligation to directly reciprocate, whether or not the recipient is conscious of it, tends to influence behavior” (Katz). It seems as though the researchers who are trying to understand how social media may sway the minds of young children are perhaps themselves also being mentally swayed by the same sources. By disclosing conflict-of-interest financial ties, those who are part of these discussions will have a better understanding of the potential biases involved.

When such disclosures are omitted, it becomes difficult for policymakers and “the people” to fully grasp the context of the experiment. Such silence contributes to what some scholars describe as a “regulatory double bind,” wherein policy change is perpetually postponed until supposedly definitive evidence arrives. However, that evidence is shaped by the very industries under scrutiny.

That said, it makes sense that these companies would care the most about how their products impact people. A lot of the research that they do is for society’s benefit. As stated before, another issue arises when there is no clear ethical review board that is either publicly available or adequately equipped to address healthcare rather than just technology.

Commonwealth Policy Action

Publicly funded studies, such as those supported by the ARC and SSHRC, have led to stronger policy action. These research grants also have much stronger independent ethics review boards. In 2018, Monash University received 420,152 AUD to study the impact of social media on the employment prospects of youth (Grant No. DE190100858). It was part of the Discovery Early Career Researcher Award. To date, primary investigators from RMIT University, the University of Sydney, and the University of Melbourne have received 495,510 AUD to conduct research on Addressing Online Hostility in Australian Digital Cultures (Grant No. DP230100870).

The impact of these research funding schemes is not abstract, but tangible. On November 28, 2024, the Australian Government passed a new law called the Online Safety Amendment (Social Media Minimum Age) Bill. The law introduces a minimum age of 16 for accounts on certain social media platforms. It will take effect by December 2025, and parents cannot give their consent to let under-16s use these platforms. In July 2025, the Australian Government expanded the ban to include YouTube (Google’s sub-company). Those in charge in Australia firmly believe that the risks outweigh the positives, and that there is enough research to say that definitively. The ban will include X, Facebook, Instagram, TikTok, Snapchat and Reddit. The tech companies that own these platforms could face fines of up to $50 million if they don’t take reasonable steps to stop under-16s from having accounts (UNICEF Australia). While it may be a coincidence that most of the social media research in Australia is publicly funded, and that Australia has been quicker to regulate these companies, the differences in funding schemes warrant further, urgent investigation. This example also illustrates the distinction between appropriate caution and unreasonable delay. In Australia, policymakers judged that enough evidence already existed. By contrast, in the US and Canada, industry-funded research has slowed comparable government action. The comparison between these digitally and culturally similar countries suggests how funding structures shape not just science, but also policy timelines.

Biomedical Urgency

Researchers in medicine often communicate with greater urgency about the potential dangers that social media poses. Jean Twenge’s notable 2018 study “Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study” found a link between increased screen time and rising rates of depression among U.S. teens (Twenge et al. 1). However, the article started by explicitly acknowledging that “previous research on associations between screen time and psychological well-being among children and adolescents has been conflicting” (Twenge et al. 1). A broad, sweeping 2024 summary paper, “Association between problematic social networking use and anxiety symptoms: a systematic review and meta-analysis”, stated that “existing research findings on the extent of this association [between problematic social networking use and anxiety] vary widely, leading to a lack of consensus.” As a result, the authors conducted a meta-analysis of the literature. That study showed a moderately positive association between social media and anxiety (Du et al. 1).

One reason for this clarity is that biomedical researchers frequently collect their own data. For example, they utilize the industry-standard Patient Health Questionnaire. In contrast, many social media scholars rely on platform-provided data. That difference allows biomedical researchers to push for policy change with fewer red-tape issues. Doctors also frequently prescribe a solution, like less screen time before bed. Social media scholars, by comparison, frequently discuss the complexity of these issues.

To restore institutional integrity, STS journals and conferences should adopt stronger, more specific, standardized conflict-of-interest (COI) disclosure protocols. Industry-funded STS research on the effects of tech on public health is both relatively new and accelerating rapidly. Authors should be required to disclose all COI relationships, for example, grants, industry fellowships, and data agreements. The ICMJE disclosure form provides a strong precedent. For example, it asks researchers to state whether or not they have any relevant COI relationships within the past 36 months. In STS, such disclosures should be visible at the beginning of a published article. The frontmatter should disclose gifts from social media companies, the source of data, and how it was accessed. Conference managers should further normalize live disclosures, for instance, by requesting that presenters summarize their potential COIs during opening remarks, perhaps in a neutral-sounding “I would like to thank…” format. It is important to note that guidelines like these are developed by scholarly communities rather than by individual scholars themselves. These measures would not only increase transparency but also improve public trust in a rapidly changing research funding landscape.

Conclusion

The rhetoric of “more research” seems neutral, but it muddies the water of social media debates. Behind these debates are the fragile minds of real children. If there is truly nothing to hide, there should be no resistance to transparent COI disclosures moving forward. Moreover, while this paper has focused on social media, a similar rhetorical pattern has begun to emerge in recent months from research on the effects of AI tools on cognitive development. These generative models will begin to shape more of how young minds learn about the world, and in turn, how they develop. With proper guidelines, technology will help young people to develop healthier, happier brains than they would have otherwise. Not the other way around.


Bibliography

Australian Research Council (February 26, 2023). Addressing Online Hostility in Australian Digital Cultures. Australian Research Council, RMIT University. ARC Grants Data Portal (Grant No. DP230100870).

Australian Research Council (May 31, 2019). The Impact of Social Media on the Employment Prospects of Young Australians (Grant No. DE190100858). Discovery Early Career Researcher Award (DECRA). Australian Research Council, Monash University. ARC Grants Data Portal (Grant No. DE190100858).

Australian Research Council (October 2, 2022). “Identifying and Handling a Conflict of Interest in NCGP Processes.” Australian Research Council, Accessed July 30, 2025. https://www.arc.gov.au/about-arc/program-policies/conflict-interest-and-confidentiality-policy/identifying-and-handling-conflict-interest-ncgp-processes.

Australian Research Council (July 1, 2025). “ARC Conflict of Interest and Confidentiality Policy.” Australian Research Council, Accessed July 30, 2025. https://www.arc.gov.au/about-arc/program-policies/conflict-interest-and-confidentiality-policy

Center for Open Science and Meta (July 17, 2024). “Meta and Center for Open Science Open Request for Proposals for Research on Social Media and Youth Well-being Using Instagram Data.” Center for Open Science, Accessed July 30, 2025. https://www.cos.io/about/news/meta-and-cos-open-request-for-proposals

Du, Jianping, et al (2024). “Association between Problematic Social Networking Use and Anxiety Symptoms: A Systematic Review and Meta‑Analysis.” BMC Psychiatry, vol. 24, no. 1, Accessed July 30, 2025. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11089718/

Google (October 1, 2024). “Google Academic Research Award Program Recipients.” Google Research, Accessed July 30, 2025. https://research.google/programs-and-events/google-academic-research-awards/google-academic-research-award-program-recipients/.

Google (July 1, 2025). “Terms for Google Academic Research Awards.” Google Research, Accessed 30 July 2025. https://research.google/programs-and-events/google-academic-research-awards/terms-for-google-academic-research-awards/

Haidt, Jonathan (2024). The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. Penguin Press.

Kahneman, Daniel (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Katz, Dana, Arthur L. Caplan, and Jonathan F. Merz (2003). “All Gifts Large and Small: Toward an Understanding of the Ethics of Pharmaceutical Industry Gift‑Giving.” The American Journal of Bioethics, vol. 3, no. 3, pp. 39–46. https://doi.org/10.1162/15265160360706552.

Laffier, Jennifer, et al (2024). “The Promise of Digital Wellness to Promote Youth Well‑Being and Healthy Communities.” Interpersonal Relationships in the Contemporary 21st Century Society, edited by Marta Kós‑Dienes and Beáta Birkás, IntechOpen, Accessed July 30, 2025. https://www.intechopen.com/chapters/1205975

Lexchin, Joel (2012). “Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications.” Science and Engineering Ethics, vol. 18, no. 2, pp. 247–61. https://doi.org/10.1007/s11948-011-9265-3.

McAlister, Amy R., et al (2024). “Social Media Use in Adolescents: Bans, Benefits, and Emotion Regulation Behaviors.” JMIR Mental Health, vol. 11, no. 1, e64626, Accessed July 30, 2025. https://mental.jmir.org/2024/1/e64626

McLeod, Saul (August 3, 2025). “Maslow’s Hierarchy Of Needs.” Simply Psychology. https://www.simplypsychology.org/maslow.html. Accessed August 6, 2025.

Meta (December 1, 2021). “Announcing the Recipients of Instagram Research Awards on Safety and Community Health.” Meta Research Blog, Accessed July 30, 2025. https://research.facebook.com/blog/2021/12/announcing-the-recipients-of-instagram-research-awards-on-safety-and-community-health/.

Meta (May 18, 2022). “Instagram Request for Proposals for Well‑Being and Safety Research.” Meta Research, Accessed July 30, 2025. https://research.facebook.com/research-awards/instagram-request-for-proposals-for-well-being-and-safety-research/.

Meta (May 16, 2022). “Instagram Research Awards for Social Technologies.” Meta Research, Accessed July 30, 2025. https://research.facebook.com/research-awards/instagram-research-awards-for-social-technologies/.

Meta (December 17, 2021). “Research Awards.” Meta Research, Accessed July 30, 2025. https://research.facebook.com/research-awards/

Milov, Sarah (2019). The Cigarette: A Political History. Harvard University Press.

OECD (May 2025). How’s Life for Children in the Digital Age?, OECD Publishing. https://doi.org/10.1787/0854b900-en.

Rueda, Manuel, and Astrid Suarez (April 30, 2025). “Rebels in Colombia Are Recruiting Youth on Social Media. The UN Wants TikTok and Facebook to Do More.” AP News. https://apnews.com/article/colombia-social-media-rebels-united-nations-6b2a8f1577709c35d5388bcb767a6fc3

Saltelli, Andrea, and Silvio Funtowicz (June 2017). “Science Was Never Intended to Be in the Market, but Today It’s a Commodity.” Observatorio Social de la Fundación La Caixa. https://elobservatoriosocial.fundacionlacaixa.org/en/-/la-ciencia-nunca-se-penso-para-el-mercado-pero-hoy-es-una-mercancia

Seidman, Peter (April 19, 2024). “The Seidman Principle.” The Seidman Principle. http://theseidmanprinciple.net. Accessed August 4, 2025.

Selva, Joaquin (May 5, 2017). What Is Self-Actualization? Meaning, Theory + Examples. Positive Psychology: Self Esteem. https://positivepsychology.com/self-actualization/

Mirowski, Philip (2011). Science-Mart: Privatizing American Science. Harvard University Press.

Sismondo, Sergio (2008). “How Pharmaceutical Industry Funding Affects Trial Outcomes: Causal Structures and Responses.” Social Science & Medicine, vol. 66, pp. 1909–14. https://doi.org/10.1016/j.socscimed.2008.01.010.

Social Sciences and Humanities Research Council of Canada (December 20, 2016). “Conflict of interest and Confidentiality.” SSHRC, Accessed July 30, 2025. https://science.gc.ca/site/science/en/interagency-research-funding/policies-and-guidelines/conflict-interest-and-confidentiality?OpenDocument

Twenge, Jean M., et al (2018). “Associations between Screen Time and Lower Psychological Well‑Being among Children and Adolescents: Evidence from a Population‑Based Study.” Preventive Medicine Reports, vol. 12, pp. 271–83. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6214874/. Accessed July 30, 2025.

UNICEF Australia (March 24, 2025). “Social Media Ban Explained.” UNICEF Australia, Accessed July 30, 2025. https://www.unicef.org.au/unicef-youth/staying-safe-online/social-media-ban-explainer

United States, Office of the Surgeon General (2023). Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory. U.S. Department of Health and Human Services, Accessed July 30, 2025. https://www.ncbi.nlm.nih.gov/books/NBK594759/

 

Latta, Hope (September 2025). “People-in-Progress: ‘Debate Escrow’ and the Language of Delay in Tech Research on Adolescent Digital Health.” Interfaces: Essays and Reviews on Computing and Culture Vol. 6, Charles Babbage Institute, University of Minnesota, 54-70.


About the author: Hope Latta is an MA student in the History of Science program at the University of Toronto (currently funded by SSHRC). She holds a Bachelor of Computer Science from Acadia University with a focus in machine learning. In 2024, Hope completed a master’s degree at Harvard University in Creative Writing and Literature. As a Zillennial social media manager, Hope ran a professional account with over 10K followers for a division of the Harvard Alumni Association. As part of her digital storytelling project, she grew a podcast’s audience by over 10,000 listeners and assisted in a PR campaign which reached an estimated 224.4M people. In 2021, Hope also put together a social media storytelling campaign on scientific issues for the United Nations Association of Greater Boston.


 


In November 1949, the New Zealand economist Bill Phillips presented the first version of his Monetary National Income Analog Computer (MONIAC) at the London School of Economics (LSE) (Corkhill, 2009). Part teaching aid, part argument for the central role of economic management in public policy, Phillips’ computer – sometimes called the Phillips Machine – used the flow of water to represent the flow of money in an economy. The computer combined electrical probes, floats and cable wires to control the flow of coloured water between a series of tanks representing different ways of spending money in an economy, and showed the ways in which changes to one part of the system affected the whole, in terms of prices, taxes, interest rates and distributions of money.

The computer was primarily used as a teaching tool at places like the LSE and the University of Leeds, where ‘the destabilising consequences of ill-considered policy intervention’ could be clearly shown to students (Barr, 2000). The Phillips machine aimed to be a mechanical model, utilising the work of Kenneth Boulding ‘to show how the production flow, consumption flow, stocks and price of a commodity may react on one another’ (Phillips, 1950). These ideas came out of a growing appreciation of the work of John Maynard Keynes and the associated political idea that economies could (and should) be managed, not left to self-regulating markets and the supposed behaviours of self-interested rational individuals.
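
For readers who want a feel for the feedback loops the machine made visible, the following is a toy discrete-time stock-flow loop written in Python. It is a sketch only, under stated assumptions: the tax rate, propensity to consume, and level of government spending are illustrative values, not Phillips’ hydraulic calibration.

```python
# A toy stock-flow loop in the spirit of the MONIAC's tanks and valves.
# All parameter values are illustrative assumptions, not Phillips' own.
def simulate(periods=10, income=100.0, tax_rate=0.2,
             propensity_to_consume=0.8, government_spending=20.0):
    """Each period: income is taxed, part of disposable income is consumed,
    and consumption plus government spending become next period's income."""
    for t in range(periods):
        taxes = tax_rate * income
        disposable = income - taxes
        consumption = propensity_to_consume * disposable
        savings = disposable - consumption           # water held back in a tank
        income = consumption + government_spending   # flows recombine downstream
        print(f"t={t}: income={income:.1f}, taxes={taxes:.1f}, "
              f"savings={savings:.1f}")

# Income settles toward G / (1 - c*(1 - t)), where c is the propensity to
# consume, t the tax rate, and G government spending: here 20 / 0.36 ≈ 55.6.
simulate()
```

Raising the tax rate or lowering spending in this loop shifts the steady-state income downwards, the same kind of whole-system response that the coloured water displayed at a glance.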

The machine gained a wider audience in the early 1950s, with Punch magazine praising its ability to communicate the impact of economic decision making, juxtaposing the clarity of the machine against the intelligence of the man in the street and the reticence of decision makers:

The machine is taller than the man in the street, and wider and heavier and much, much cleverer. It is also less reticent about its inner feelings, which are…exposed in the frankest manner – a complex pattern of transparent tubes, of plungers, sluices, checks, balances, buttons, levers and pulleys, all combining to present an instantaneous picture of the nation’s economy…In our view there should be installation in every town hall (or recreation ground, railway station or dog-track) in Britain. (Barr, 2000)

Figure 1: Professor A.W.H. (Bill) Phillips with the Phillips Machine, Wikimedia Commons.

The economic principles the MONIAC demonstrated have been subjected to several criticisms. From the perspective of those of us living in highly financialised and globalised capital systems, the limited capacity to show the effects of international capital flows in and out of the economic model is an obvious omission. However, as it was built in a time of strict capital controls, such criticisms perhaps reveal more about the current priorities built into our economic system than they do about the limits of Phillips’ design.

The energy basis powering Phillips’ computer has also faced criticism. The economist Kate Raworth has noted that the MONIAC both relies on and makes invisible the energy forms that power the gates and flows of water. As such, Raworth has argued that the computer performs for a certain kind of ideology that is based on energy extraction but at the same time does not take that natural basis into sufficient account in its model (Raworth, 2017).

While criticisms of Phillips’ computer are certainly valid, what follows in this essay is an attempt to show how, in its use of computing power to create the impression of a manageable economy through water flow, the MONIAC represents important, but typically marginalised, trends in the history of computation and economic management from the late eighteenth century onwards. This short essay will try to show how the management of economies and the control of water became linked through the emergence of British/European political economy and technological projects – through the concepts of routine and predictability and through practices of measuring and recording. Importantly, this was not something distinct to the mid-twentieth century, as commonly supposed. Indeed, routines and predictability became technologized in the late eighteenth and early nineteenth centuries, through major construction projects like the development of the artificial canal system, as well as through the use of timekeeping and computational regimes for managing oceanic voyages.

The political economy of routines: locks, stocks and gaugers

In his 1776 Wealth of Nations, Adam Smith argued that without a monopoly, the only commercial activities that could promise security of returns for holders of joint stocks were in banking, insurance, water supply projects and the construction of canals:

The only trades which it seems possible for a joint-stock company to carry on successfully, without an exclusive privilege, are those, of which all the operations are capable of being reduced to what is called a routine, or to such a uniformity of method as admits of little or no variation. Of this kind is, first, the banking trade; secondly, the trade of insurance from fire and from sea risk, and capture in time of war; thirdly, the trade of making and maintaining a navigable cut or canal; and, fourthly, the similar trade of bringing water for the supply of a great city. (Smith, p. 316)

Canal construction, maintenance and management were, for Smith, safe bets for speculators because they could be insulated from the external or internal shocks that made other trades so risky, and could therefore produce predictable returns on investment. In this way, Smith was helping to forge what became a common connection between predictable flows of water and predictable flows of capital. Smith’s concern with routine and predictability was of a piece with the radicals of his time. As the historian Ken Alder has pointed out, the work of engineers and political economists of the late eighteenth century was concerned with what he terms the ‘uniformity project’: standardising measurements, creating technical blueprints, and designing systems of manufacture that were both interchangeable and more universally deployable without recourse to customary and embodied forms of craft skill and organisation (Alder, 1998). Such radical projects came up against forms of resistance, notably from those artisans whose mode of manufacture relied on forms of design and regulation that resisted the claims and processes of abstraction and uniformity that Smith and his allies valued. But for Smith, canals, like insurance and banking, were trades that were dependent on capital infrastructures rather than the skill and labour of artisans.

To make the connection between economic flows and hydraulic flows uniform and routinised in late-eighteenth century Britain, a vast infrastructure of artificial hydraulic works was constructed. In the domestic sphere, this took the form of an extensive network of artificial canals, locks and dry docks. In the oceanic sphere, this took the form of devising mechanical and astronomical methods for finding and recording longitude at sea. One aim for this hydraulic infrastructure was to render the less-than-predictable rivers and tides at least moderately predictable, so that investment and management decisions could be made and flows of cargoes could circulate. Yet, the introduction of new infrastructural fixes for making flow predictable often relied on very social fixes to make them work.

Between 1758 and 1810, over 4,500 miles of canals were dug in England, Wales and Scotland (Burton, 2015). This was in addition to projects reconfiguring existing rivers to make them ‘navigable’. The combination produced an extensive network of interlocking waterways, stretching through vast areas of countryside, and connecting mines, warehouses and factories with cities, towns and coasts. A vital feature throughout the canal system was the ‘lock’. Locks served as deliberate interruptions in the system, whereby the flow of water was managed through the deliberate opening and closing of gates to raise or lower barges into static pools as a way of overcoming differences in gradient. 

Figure 2: Rochdale Canal Lock 91, Deansgate, Wikimedia Commons.

Lock gates and pools were deployed in order to ensure that barges carrying raw materials (most importantly coal) and manufactured goods could maintain a steady and predictable speed along the canal, rather than uncontrollably accelerating or decelerating according to the varying gradients of the land through which they passed. Locks were used therefore to prevent damage to the cargo as well as ensuring a more uniform distribution of barges along the waterway.

The introduction of lock gates and pools allowed for flows of water to be subservient to forms of human control. But these sites of water flow management also had literal downstream effects in which different interests along the canal and riverways came into conflict over claims of losses and gains of water caused by the locks. This was particularly the case for those canals connected to pre-existing riverways, upon which owners of mills were dependent on water flow and power for driving their waterwheels, which in turn powered their machinery. However, out of the conflicts created by stopping flows of water according to the needs of cargo, new ways of recording and calibrating ‘stocks’ of water were introduced.

The tensions and solutions that canal locks brought into being are well illustrated by the effects of the Grand Junction Canal, which was built between London and Northamptonshire between 1793 and 1805. As was the case with a huge number of canal projects, millers and landowners came into conflict with the canal company over the amount of water being held up in the lock system and the amount of water available for powering waterwheels. However, a contemporary observer described the ways in which the canal owners – in this case the Grand Junction Company – and the landlords and mill owners – in this case the Duke of Northumberland – established ways to overcome the tension:

[T]here are two cottages on the banks of the canal, which are called the ‘Water-gauge houses’, one of which is inhabited by a servant of the Grand Junction Company, the other is occupied by a person placed there by the Duke of Northumberland.

Both of these persons keep an accurate account of the height of water at all hours of the day, and also at the times of the different boats passing the locks[.]

… Thus, if five hundred locks of water are given for the use of the millers, to whom the Duke is trustee, they (the Grand Junction Company) in return receive an equal quantity below; at Cowley lock, near Uxbridge, the Grand Junction have a servant to keep a check of water against the Duke’s agent at that place: by this means…all litigation is avoided, and the millers on the stream know to a certainty what quantity of water they are to depend upon. (Hassell, 1819)

Two key features of the lock system became of central importance. First, the use of the lock pools to provide a form of measurement – the ‘lock’ – to determine the amount of water gained or owed to the canal company or the mill owners. By temporarily fixing its flow in this way, water became a form of stock, momentarily measuring the differences between flow loss and flow gain among the various kinds of economic interests along canal and riverways.

The second salient feature was the canal system’s reliance on a special kind of worker: the gauger. These workers, who were typically also employed as lock-keepers, were tasked not only with measuring and recording the amounts of water before and after the lock, but also with recording the number of barges passing through their position. This kind of measuring and recording activity became increasingly common towards the end of the eighteenth century, yet it has typically been ignored by historians of economics, who privilege instead the role of technologies in determining economic and scientific changes (e.g. Mokyr, 1990). Yet, without the water gaugers, the competing interests in the canal system could not calibrate their claims to be owed water. Indeed, it could be said that in order to make locks appear like a suitable technical solution for managing water and stock flows, humans had to start acting like regular, predictable, measuring machines. Contrary to the general claim that canal construction and operation was predictable based on its technological form, the geographical and social issues involved in canal construction meant that water flow and stocks could only be made predictable with a ritual of human water-gauging along its many locks.

Gauging and lock-keeping tasks were central to the operation of canals, yet at the same time the roles were poorly defined in terms of contract and payment. Most were employed directly by committees in charge of the operations of specific canal companies, though mill-owners and local gentry did hire gaugers, particularly during moments of distrust or discord between themselves and the canal companies. Payment varied considerably between canal companies, but most lock-keepers earned a fairly small weekly payment, usually supplemented with access to a canal cottage and the rights to part or all of the tolls collected from vessels passing through the lock. Certain lock-workers were entrusted by committees to take charge of hiring labourers for working the quarries and warehouses that gradually became part of the holdings of canal companies as their operations expanded.

Although most lock-workers were men, evidence suggests that the range of tasks demanded of them required regular support and assistance. Families of lock-workers lived in the cottages owned by canal companies and family members or servants were frequently relied upon to help undertake the tasks of routine surveillance and recording. Sometimes the work of family members was made official, as the minute books of the Committee of Proprietors of the Stroudwater Navigation recorded in February 1787:

John Shire to be lock keeper at Dudbridge at 10s a week and to work at any employ in Navigation. In his absence, wife to take care of lock when vessels are passing. 1s per week of pay to remain in Company's hands until it amounts to 40s; to be repaid if he leaves Company's service on giving up house without trouble. (Committee of Proprietors of the Stroudwater Navigation Minute Book Entry, Thursday 1 February 1787, Gloucestershire Archives D1180/1/2)

The unofficial duties of lock-workers encompassed more than surveillance and gauging. Companies relied on lock-keepers’/gaugers’ knowledge of local water-systems and economic activities to provide not just evidence of, but also solutions to, the regular disputes between different claimants for waterpower. Such information gathering demanded knowledge of water levels and of the productive performance of mills and ponds along connected waterways (e.g. Stroudwater Navigation Minutes Mon 12 Oct 1795). This in turn required a degree of socialisation and information gathering in affairs away from the lock, such that the apparently locatable and individually payable tasks of lock-workers in fact required a high degree of collective, collaborative and largely unpaid labour.

Adam Smith argued that canals were safe bets for joint-stock investment because their operation admitted little or no variation. This perspective deliberately elided the new kinds of social conflicts and routines that were developed to ensure canals could be seen to perform with a degree of uniformity. In the quest to make the flow of goods appear to be the same as the flow of water, we see that a new group of technicians and technologies – in this case gaugers and locks – had to do considerable work. Central to this kind of work was the production of data: of water depths, of passing barges, of quantities of water owed or gained. In this way, Smith was assuming what other political economists and enlightenment philosophers made explicit in what they saw as the advantages of industrial forms of production: the opportunity to make humans follow routines and produce information (Sewell, 1980). This mattered for eighteenth-century reformers, who saw the opaque and embodied forms of artisanal knowledge as one of the key obstacles to economic growth and rational social organisation. In this way, the success of the canal system in appearing predictable, unvarying and routine-based rested upon its ability to accommodate the practices of human-based measurement and recording, laying the ground for what one historian has summarised as the key project for Charles Babbage and his allies in their advocacy of mechanical computation: ‘the mechanisation of intellectual labour’ (Ashworth, 1994).

Computers, longitude and the ‘calculating eye’

The late eighteenth-century project to make the flow of rivers and the flow of goods appear reducible to each other was not the only water-based project of its kind. Of all the uniformity projects undertaken by European states in the late eighteenth and early nineteenth centuries, perhaps one of the most ambitious was the attempt to make maritime and oceanic trade reducible to routine, or uniformity of method. This found its expression in European maritime empires in the establishment of institutions devoted to finding a means of measuring and recording longitude at sea (Baker et al., 2025; Pimentel, 2015).

Popular interpretations of the quest for longitude have suggested that its purpose was to prevent shipwrecks (Sobel, 1995). However, it has been shown that the purpose of establishing a means to find and record longitude was much more about reducing the risk that ships would miss important trade winds, and about providing a record of a ship’s route over time that could be retrospectively inspected upon its return. Trade winds mattered because when European ships left the Atlantic via the Cape of Good Hope, they were in danger of being dragged easterly towards the South Pacific; knowing when to turn north mattered hugely so that a ship could be captured by trade winds and pulled towards the Indian Ocean (Cook, 1985). The same danger existed on return journeys back to the Atlantic. The primary danger for merchants and governments in this imperial network was ships being late: delays of months and years had huge impacts on those waiting for goods, profits and instruments of trade and war (Cook, 1985; Phillips, 2021). To make sense of where a ship’s crew claimed to have been, a solution for finding longitude went hand in hand with the production of logbooks, in which officers made regular entries so that a ship’s route and timings could be inspected and appraised on its return (Phillips, 2020).

These practices were about making oceanic travel more reliable, predictable, and accountable. Towards the end of the eighteenth century, two methods for finding longitude at sea had been developed and trialled on state-sponsored voyages of exploration. One was finding longitude via a mechanical timekeeper (what came to be called a chronometer): the timekeeper carried the time at the place of departure, which could be compared against the local time on the ship, and the difference between the two gave the longitudinal reading (every four minutes of time difference being equal to one degree of longitude, since the Earth turns through fifteen degrees each hour). The second method was based on astronomical observations used alongside a publication called the Nautical Almanac.
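To make the arithmetic concrete, the conversion at the heart of the chronometer method can be written out in a few lines of modern code. This is an illustrative sketch only; the function name and example values are invented here, not drawn from the historical sources:

# The Earth rotates 360 degrees in 24 hours: 15 degrees per hour,
# or one degree of longitude for every four minutes of time difference.
def longitude_from_times(port_time_hours, local_time_hours):
    """Longitude in degrees east (+) or west (-) of the port of departure."""
    return (local_time_hours - port_time_hours) * 15.0

# Example: the chronometer still keeps departure-port time, 14:00 (14.0),
# while observation puts local noon at 12.0; the ship is 30 degrees west.
print(longitude_from_times(14.0, 12.0))  # -30.0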

Although largely invisible in many accounts of British imperial and maritime history, the introduction of timekeepers and the Nautical Almanac depended upon the work of a group of Greenwich-trained personnel, who had been employed as computers and calculators as part of the system of astronomical production for making the Nautical Almanac, as well as for testing the rates of mechanical timekeepers bought by the Admiralty (Croarken, 2009; Phillips, 2018). These computers had worked directly for the Astronomer Royal at the Royal Observatory in Greenwich. However, between 1770 and 1810 they were sent out on state-funded voyages of discovery – including Captain Cook’s famous voyages – where they were tasked with testing the feasibility of the astronomical and chronometric methods for finding longitude (Phillips, 2018). Above all, they were expected to produce extensive records of the routes made by the voyages and of the performance of the astronomical and navigational devices on those voyages. The extensive voyage records created by the Greenwich computers were used to produce charts and maps, as well as performance histories of the instruments upon which longitude calculation depended.

Whilst the Greenwich computers selected for voyages had performed the relatively distinct task of astronomical calculation at the Royal Observatory, they came out of a larger system of data gathering that has been described as akin to a ‘cottage industry’: a workforce of human ‘calculators’, ‘computers’ and ‘comparers’ – to a large extent made up of women – who contributed from industrialising regions away from, but directed towards, the Greenwich Observatory system on the Greenwich meridian line (Croarken, 2003).

As the demands of imperial and oceanic management grew, so did the demands on the dispersed system of human computation. By 1820 the use of computation and astronomy had widened to encompass a broad range of domestic and imperial financial, actuarial and commercial pursuits. As William Ashworth has shown, from the 1820s the group of astronomers that formed the Royal Astronomical Society argued that astronomical book-keeping ‘represented the most successful and perfectly formed system of maintaining records’ (Ashworth, 1994). One of the most visible of those ‘business astronomers’ was Charles Babbage, whose belief in the importance of astronomical tables in underwriting the managerial operations of British commercial activity led him to criticise the human-made errors that were detected in the Nautical Almanac. As such, Babbage proposed replacing error-prone human computers with a mechanical substitute. In this way Babbage’s calculating engine formed part of the growing connection made between technology and the promotion of economic development (Ashworth, 1995; Berg, 1980). As Babbage’s ally at the Royal Astronomical Society, Francis Baily, made clear, the mutually reinforcing trends of technological and economic development were predicated on substituting mechanical calculation for human calculation:

By the help of the machine ... the manual labour vanishes, and the mental labour is reduced to a very insignificant quantity. For, as I have already stated, astronomical tables of every kind are reducible to the same general mode of computation ... there is no limit to the application of it in the computation of astronomical tables of every kind. (Baily, 1823)

The central concern for Baily and Babbage was that manual computation could introduce errors into the tables upon which commercial activities relied and through which they could be described. The stakes were highest with regard to Britain’s oceanic trade, upon which so much of its imperial enterprise depended. As such, the proposed substitution of the mechanical computer for human astronomical labour was to allow maritime trade to be managed and visualised with some degree of accuracy.

As in the case of canals, the degree to which the British oceanic-imperial system was judged to be working was assessed primarily through its ability to accommodate data-making practices. Much like the canal gaugers, the task of the human computers was to make a potentially disorderly system appear rational and calculable. As such, the computational labour that went into both the canal and longitude systems required making technologies perform in ways that could accommodate, and be absorbed into, the management of trade systems. In this way, the economic management of trade became possible through the arrangement of technologies to promote the record keeping that made computation possible. By the early 1820s these ‘wet’ computational practices had gained such importance for the management of commerce, trade and empire that people like Babbage and Baily could claim that computation was the solution for – not just the product of – careful relations between labour, technology and management.

Conclusion

It was suggested at the beginning of this short essay that the basis on which experts like Bill Phillips could claim to manage and represent economies through computable water flows lay in the twin projects, emerging from the late eighteenth century, of conflating water flows with economic flows through ingenious technological schemes. Central to this task was accommodating and encouraging the practices of measuring and recording so that these projects could appear at least moderately predictable and routine-based. In the early stages of this ‘uniformity project’ the measurements and calculations depended on typically anonymous human labour. By the 1820s, the work of those human computers, calculators and gaugers, and the aims of the states upon which their work relied, had gained such centrality that technological solutions were turned back onto the people who had made the systems work in the first place. That this process was so connected with projects to make both water and trade flows appear predictable is something that Bill Phillips and his colleagues did not acknowledge in their description of the MONIAC; yet, by inverting the logic, they were performing the same argument: that to understand the workings of an economy, water flow had to be made predictable and measurable through computational means.


Bibliography

Alder, Ken (Aug. 1998). “Making Things the Same: Representation, Tolerance and the End of the Ancien Régime in France,” Social Studies of Science, vol. 28, no. 4, pp. 499-545.

Ashworth, William J. (1994). “The Calculating Eye: Baily, Herschel, Babbage and the Business of Astronomy.” The British Journal for the History of Science, vol. 27, no. 4, pp. 409–41. http://www.jstor.org/stable/4027624.

Baily, Francis (1823). “On Mr. Babbage's new machine for calculating and printing mathematical and astronomical tables.” Astronomische Nachrichten, vol. 1, no. 46, pp. 409-422.

Baker, Alexi; Dunn, Richard; Higgitt, Rebekah; Schaffer, Simon; and Waring, Sophie (2025). The Board of Longitude: Science, Innovation and Empire. Cambridge: Cambridge University Press.

Barr, N. (2000). “The history of the Phillips Machine.” In R. Leeson (ed.), A. W. H. Phillips: Collected Works in Contemporary Perspective (pp. 89–114). Cambridge: Cambridge University Press.

Berg, Maxine (1980). The Machinery Question and the Making of Political Economy 1815-1848. Cambridge: Cambridge University Press.

Burton, Anthony (2015). The Canal Builders: The Men Who Constructed Britain's Canals. Pen and Sword.

Cook, Andrew S. (1985). “Alexander Dalrymple and John Arnold: Chronometers and the Representation of Longitude on East India Company Charts,” Vistas in Astronomy, vol. 28, no. 1, pp. 189–95.

Corkhill, Anna (2012). “‘A superb explanatory device’: The MONIAC, an early hydraulic analog computer,” University of Melbourne Collections, vol. 10, pp. 24-8.

Croarken, Mary (2009). “Human Computers in Eighteenth- and Nineteenth-Century Britain.” In Robson, Eleanor and Stedall, Jacqueline (eds), The Oxford Handbook of the History of Mathematics (pp. 375-403). Oxford: Oxford University Press.

Hassell, John (1819). Tour of the Grand Junction. London.

Mokyr, Joel (1990). The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press.

Phillips, A. W. (August 1950). “Mechanical Models in Economic Dynamics.” Economica, New Series, vol. 17, no. 67, pp. 283-305.

Phillips, Eóin (2018). “Instrumenting Order: Longitude, Seamen and Astronomers, 1770-1805.” In Macdonald, Fraser and Withers, Charles W.J. (eds), Geography, Technology and Instruments of Exploration. Routledge.

Phillips, Eóin (2020). “Our trusty friends? The place of technology in global histories,” Past and Present: Political Economy and Culture in Global History. https://pastandpresent.org.uk/our-trusty-friends-the-place-of-technology-in-global-histories/

Phillips, Eóin (2021). “Relojes y cargamentos: tiempo, viajes e imperio en la era de la industrialización.” Sabers en acció. https://sabersenaccio.iec.cat/es/relojes-y-cargamentos/

Pimentel, J. (2015). “A Southern Meridian: Astronomical Undertakings in the Eighteenth-Century Spanish Empire.” In Dunn, R. and Higgitt, R. (eds), Navigational Enterprises in Europe and its Empires, 1730–1850. Cambridge Imperial and Post-Colonial Studies Series. London: Palgrave Macmillan.

Raworth, Kate (2017). Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist. London. 

Sewell Jr., William H. (1980). Work and Revolution in France: The Language of Labor from the Old Regime to 1848. Cambridge: Cambridge University Press.

Sobel, Dava (1995). Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time. Fourth Estate.

 

Phillips, Eóin (August 2025). “Computational Schemes and Technological Routines: A Wet History of the Phillips Machine.” Interfaces: Essays and Reviews in Computing and Culture Vol. 6, Charles Babbage Institute, University of Minnesota, 38-53.


About the author: Eóin Phillips is an historian and sociologist of science, technology and economics, and assistant professor at La Salle-Ramon Llull University. His work explores the long history of computation and calculation, encompassing the economic, social and political forces that have shaped - and been shaped by - their development. His first book, Making Time Fit: Chronometers, Computers and the Calculative State, 1770-1820, will be published later this year. He gained his PhD at the University of Cambridge in 2016 and has taught at Ruskin College Oxford, the University of Cambridge, and the Autonomous University of Barcelona.



James W. Cortada Author book with small ladder
Figure 1: Book cover of Beyond the Facts. 

It is now clear that we have entered a new step-change period in the development and use of artificial intelligence in its various forms. It is also obvious that computer scientists, programmers, and end users are already integrating recent developments into their daily workflows, although the extent to which they are doing so remains uncertain, for now. Historians can point to other surges in digital adoption: IBM’s System/360 and 370 mainframes between roughly 1966 and 1975, certainly PCs beginning between 1983 and 1985, digital cameras in the 1990s, the public’s embrace of the Internet after 1998-1999, and smartphones almost from day one in 2007. Therefore, what is happening with AI should come as no surprise.

However, what is surprising is the growing stream of comments from AI experts that all the information their innovations are consuming is insufficient to train their software, that the Internet has been vacuumed dry of data, and that to move forward they now have to invent new data, even giving it a name: Synthetic Data—a fancy term for the information that simulators or algorithms create in place of real-world data. Leave aside the fact that synthetic data will probably cause some scholars to ponder whether it is digital fiction or hypothetical “fictions of the mind.” But whether real or synthetic, it is all constructed as data—facts, figures, signals, numbers, etc.—with increasingly complex texts that attempt to convey their implications for human consumption. But coming back to the historian’s insight that technologies surge and do not just evolve incrementally, how do AI’s recent developments interact with or sit within broader contexts of the evolution of information? Just as important, how do humans view it?

IBM mainframe installation
Figure 2: IBM mainframe installation.

Scholars in various disciplines have yet to engage in such a conversation, although much research and thinking have gone into understanding the nature of information, and increasingly its rapid evolution and increase in volume since the arrival of the Second Industrial Revolution in the mid-1800s. This was driven by faster transportation (e.g., steam ships and railroads), the introduction of electrical communications (e.g., telegraph, telephone, radio, TV, Internet, smartphones), and data processing (e.g., adding machines, tabulators, mainframe computers, satellites, PCs). Others had been contemplating the nature of information for literally thousands of years: first clerics, then lawyers, and, just as important over the past 2,000 or more years, philosophers, joined in the twentieth century by scholars in the humanities and social sciences.

To summarize where we are today, these experts generally agree that there are two types of information: explicit and tacit. The former is data and facts, such as that my name is Jim Cortada, that I was born on September 7th, or that the temperature is 72 degrees Fahrenheit in Minneapolis. These are facts, except perhaps the temperature in Minneapolis, which is more often colder than that. A feature of explicit knowledge is that it can be communicated exactly. One can write down my name and birth date, email them to someone else, and they would be understood. User manuals for operating a computer or a lawnmower can likewise be communicated and correctly understood. Nothing seems better suited to computing and AI than explicit data.

What is Tacit Knowledge?

Tacit knowledge cannot be so communicated because it is fuzzy, difficult to describe precisely, and cannot be learned by simply reading about it. For example, nobody has been able to document a set of instructions for learning how to ride a bicycle that, when read and applied, will make a person able to ride one. To learn to ride a bicycle, one must learn through the experience of attempting to do so many times. So, a parent must run alongside a child holding the handlebars and seat until the little one “gets it” and one day rides without falling. Doing specific things in sports fits into this category, too: hitting a home run in baseball, shooting a 3-pointer in basketball, or knowing exactly when to pass a soccer ball to a teammate to score a goal. Theory is fine, but such knowledge must be learned through practice and experience, not by reading a text or even hearing a coach instruct one. AI and computers do well in acquiring and using explicit knowledge, but they are now beginning to encounter the problem of tacit knowledge; hence, perhaps, the attempt to overcome its ambiguity and fuzziness by acquiring, even inventing, data in the hope that more of it will yield the benefits of tacit knowledge.

But why should we care about tacit knowledge? To begin with, human experience has made it clear that there is more tacit knowledge “out there,” or yet to be created (derived), than there is explicit knowledge. Humans know this, and the older they are, the more they realize its value and recognize that they do not fully understand what it all means. However, they know it when they see it. They have their descriptors for it, served up by philosophers, clerics, and grandparents: wisdom, experience, instinct, or gut feel. That wise old relative or retired friend who can credibly answer one’s questions is familiar to all readers: the retired colleague who, when asked “When should I retire?” responds with, “You will know when it is time,” and in hindsight is proved right; the grandparent who knows exactly what corner of a lake to go fishing at and at what time, which changes with the seasons; the person who responds to a proposed action with, “I don’t think that is going to work; I sense that your timing is not quite right,” and so forth.

We know people who are wise, who just seem good at “connecting the dots,” that is to say, who can combine explicit knowledge (i.e., facts, hard information) with something else (contextual knowledge, experience, or something going on in the brain) to provide an integrated point of view about an issue. It is the integration of explicit and tacit knowledge that is so powerful. Brain scientists have documented that humans integrate multiple types of information, and do not normally apply computing’s approach of churning through vast quantities of data to identify patterns. A child attempting to catch a ball thrown in their direction does not perform a series of mathematical calculations to determine where to stand; they go heuristic on us, sensing where they must go, and do this faster than many computers and with far less data. Over time, they will get better at positioning themselves—another tacit knowledge function.

So what? Developers and users of AI will need somehow to integrate explicit and tacit knowledge if the software is truly to improve on older methods and if, as its advocates predict, it is to become as intelligent as or more intelligent than humans. That means technologists are going to have to learn a great deal about the nature of tacit knowledge, and that suggests they must go mainly to the philosophers for whatever is known, because the philosophers have a virtual monopoly on the topic. Theologians, too, claim such dominance, but when you look at what they do, they, too, are philosophers. The problem with doing so is largely the same one we face today in all academic disciplines: philosophers largely talk to each other, so they have their own language, code words, definitions, theories, concepts, and frameworks that they assume their colleagues understand, allowing them to hike through the tall grass of arcane issues. The rest of us do not want to do that; we would rather absorb and apply some “right level” of appreciation, much like the person who asks “What time is it?” and does not want a lecture on the accuracy and underlying technology of the watch wearer’s particular timepiece. To make matters more complicated, historians of philosophy and information know that there is much to be learned by studying Aristotle, Plato, Kant, and certain philosophers from the past two centuries.

Those individuals from many centuries past knew nothing about computers or modern disciplines. Or did they understand underlying parallel realities? It is an intriguing question, but given what historians have always said—that context profoundly affects everything we do—how can today’s digital community avoid engaging with earlier and other purveyors of insights into tacit knowledge? They cannot avoid that reality. Most historians and commentators on any facet of information and the digital prefer to study explicit knowledge, because it is explicit and thus easier to work with than the much more difficult, fuzzy topic of tacit knowledge.

A personal experience illustrates this point. I have written over a dozen books dealing with various forms of explicit knowledge, such as computer applications, the role of information in society, fake facts, the history of the evolution of information since the 19th century, today’s facts, and how to manage any corpus of data and facts. But I avoided like the plague any attempt to write a book, let alone articles, about anything tacit, because it was difficult to do explicitly, and readers want explicit discussions. Scholars always try to make explicit that which is implicit or tacit; it is what they do. It is also the declared intention of AI to make the unknown known, when in fact what is unknown remains unknown until known, at which time it is explicit. It took decades of experience in all manner of topics (work, life, parenting, scholarship) before I reached the circumstance where (a) I was beginning—emphasis on the word beginning—to develop some understanding of what tacit meant and had read just enough philosophy to appreciate what the good philosophers and today’s professors of philosophy are working on, and (b) I dared to publish on the topic without caring whether I was about to destroy any scholarly credibility I might have earned through my prior work on explicit information. The potential positive or negative consequences of foraging in the tacit were so ill-defined, because the subject itself was so unclear and fuzzy, that I would not have dared attempt such an initiative in my 40s, 50s, or 60s. That is a shame, because I now know enough to say that computer scientists and AI experts, who are many decades younger than I, could and should explore tacit knowledge. After all, they will need to apply it to the evolution of AI in the years to come.

Boundaries and Linkages between Tacit and Explicit Information

The most basic reality of explicit and tacit knowledge is that they are related, entwined, and simultaneously used by people. AI has already started to do the same and clearly will find ways to integrate and coordinate them if it is to compete with what the human brain already does, including what a child does when playing a sport involving a moving ball. So, understanding the boundaries and linkages between the explicit and the tacit is a crucial early requirement for anybody studying the role of information and the use of AI. Why? Because, as mentioned earlier, people want specificity about a topic; without it they prove reluctant to take action. People do not want undisclosed, under-represented, or undervalued information floating around. Economists want to understand the tacit knowledge of value to economies and organizations. Business leaders have spoken for half a century using Peter Drucker’s phrase “knowledge management,” one originated by economists. Thoughtful people do not want an “elevator answer” unless it is buttressed with an underlying appreciation for what they just heard. People want the power of prediction based on knowledge. Historians abhor making predictions because, without always explicitly saying so, they understand that many ill-understood, unrecognized circumstances roam the land in the guise of ambiguity, otherwise known as tacit knowledge.

So, research on where explicit and tacit knowledge intersect is vital. Set aside the ambiguity and complexity that I and others encounter when we look at the topic. I wrote an entire book describing much of this, and do not know how much insight I contributed, again because of the ambiguity involved (Beyond the Facts). But we know a few things, briefly mentioned here. A fact is explicit when someone (or a discipline) declares it to be so. Thus, it is socially constructed. Academic disciplines created islands of information on which they declared facts to be facts using their language, frameworks, and theories. For each, insights that others might hold on other information islands may be tacit or simply yet to be discovered. So, one boundary is where one disciplinary island ends and another begins. Philosophers sometimes refer to such declarations as “fictions of the mind”; meanwhile, reality does its thing regardless of our perceptions.

People practice tacit alongside explicit knowledge and are becoming increasingly aware that they are doing so. For example, scientists practice both types in research laboratories and knowingly teach the two to young researchers without codifying them in, say, scholarly articles; the knowledge is applied instead. Writers and artists do the same, relying on tacit knowledge, and on emotion, to describe a character or action.

iphone
Figure 3: An early version of the Apple iPhone. 

Imagine a bridge with islands of information on one side—the world of explicit knowledge, of known, absolute facts—and on the other side anything that is tacit, unknown but acknowledged to exist. Our objective is to haul the tacit over the bridge to the explicit—the overarching mission of the pursuit of knowledge and of the adoption of scientific methods of research. The modes of transportation across this metaphorical bridge are varied, and the traffic appears overwhelmingly “tacitish” in nature. These modes include rituals, culture (values, interests), human languages, movements (physical, social), academic disciplines, sexual and social orientations and identities, intellectual and disciplinary collaborations, laws of nature and human experience with them, and now, increasingly, AI. Each has been studied to one degree or another, but our challenge now is to recognize that they operate simultaneously. We have much to learn about this traffic if AI is to compete with human brain functions. One can observe that the traffic is a mess: metaphorically, a mix of racing cars and donkey carts, bicycle riders and others riding in fancy carriages, yet still others jogging or walking, all at the same time. This is just like the child figuring out where to be to catch a ball while simultaneously running toward its anticipated arc and thinking about what the other players are doing.

Back to IT, however, as it has its role to play. Data stacks, relational databases, and data pipelines are shaping the modes of transportation between explicit and tacit bodies of information. I think these are the rudimentary foundations of AI that one should recognize. Such tools as SQL queries, cloud repositories, and the filtering of data through analytical frameworks are part of the architecture of our bridge. Data stacks alone make possible the business successes of Amazon, Berkshire Hathaway, Alphabet, JP Morgan Chase, and Microsoft, among many others, as well as, for example, firms in the pharmaceutical industry. Everyone is looking for patterns of activity, converting data into explicit forms of knowledge: insights, wisdom. IT platforms are also part of the infrastructure for bringing explicit and tacit knowledge closer together.
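To make concrete the kind of pattern-finding described above, here is a minimal, illustrative sketch; the records and field names are invented, and the simple aggregation merely stands in for the SQL-style queries the paragraph mentions:

# Hypothetical order log: each row is a bare, explicit fact.
from collections import Counter

orders = [
    {"customer": "A", "product": "widget"},
    {"customer": "B", "product": "widget"},
    {"customer": "A", "product": "gadget"},
    {"customer": "C", "product": "widget"},
]

# Counting by product plays the role of an SQL GROUP BY: the resulting
# ranking is a pattern that no single record contains on its own.
pattern = Counter(order["product"] for order in orders).most_common()
print(pattern)  # [('widget', 3), ('gadget', 1)]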

Increasingly evident to us moderns but well-known to philosophers, clerics, and societies long before the arrival of current times, is the role of storytelling. It is how humans for tens or possibly hundreds of thousands of years transmitted largely tacit knowledge with explicit examples to make a point, to teach a child new skills, and to demonstrate a sound practice. Stories work well, too, if they reinforce previously accepted views of how the world works. We are going to have to find ways to weave that insight into AI. Sociologists and cultural anthropologists may need to become AI gurus. Storytelling is part of the bridge.

Some Intriguing Current Questions to Ponder

Jeffrey Yost, like so many others engaged with digital technology and social issues, has been observing recent developments in AI and has raised a few questions relative to tacit knowledge that suggest further how the tacit and AI may intermingle. He reminds us that Amazon workers are training AI in incremental steps, demonstrating the need to combine human and computational knowledge to perform specific tasks. It is the sort of work being explored by such scholars as Mary Gray, Siddharth Suri, and Lilly Irani, among others. These workers are practicing the art of bridging between humans and machines in today’s context.

Labor surveillance is another related topic. Shoshana Zuboff has written extensively on it, but observers of labor behavior have been doing so since the dawn of the twentieth century, notably, in the early years, the advocates of Taylorism/Scientific Management. They were all trying to understand and describe tacit behavior in explicit terms before making recommendations based on their insights. A debate ensued for over a century about how much insight such exercises offered. The factory floor observers were able to improve the productivity of workers, but at the cost of such practices as repetitive movements, while by the end of the century concerns about the quality of work environments and the impact on freedom of expression had entered the mix. In other words, the discussion moved from the explicit timing of work tasks to debates about the role of digital spying on the attitudes and behaviors of citizens, the latter essentially moving across the bridge into the world of the tacit. AI will increase the ability of organizations to do both types of labor analysis, which in turn will perhaps make robotics more widely applied (that is happening now) while also influencing managerial attitudes and policies toward labor’s concerns about how people work and their role in modern political circumstances. My research suggests that the use of robots exercising decision-making capabilities will broaden as they are further equipped to deal with greater ambiguity, rather than operating through hard-coded rules, which has been the norm for the past half century. Robotic mechanical functions, when wedded to learning AI applications, represent a large frontier of opportunities for scholars to study, employees to implement, and entire industries to transform.

But an open question, related especially to laborers and their unions, is what role AI has played in their lives. We do not yet know the answer, other than that automation has long been associated with limiting employment in some jobs while expanding other professions and skills even more. Unions tend to resist automation, fearing it as disruptive, but economic historians have shown that automation improved job quality and, in some industries (e.g., transportation, consumer goods, chip manufacturing), expanded employment. In agriculture in the United States, technological innovations have spectacularly improved the productivity of farmers, reducing the share of workers employed on farms from 80 percent in the mid-19th century to less than 2 to 3 percent today, depending on whose numbers one cites. However, all of that was largely achieved before the era of modern AI. Yet farmers today are aggressively applying all manner of IT to their work, including earlier versions of AI. There is a long history of labor historians discussing fears of automation; one can expect that debate to continue for decades, because current developments in AI will affect the nature and quantity of work done by humans, and it is never clear when a new technology appears how it will do so.

A final issue, alluded to earlier regarding sports, is the intensive use of data analytics in schools, universities, and professional sports around the world to improve the performance of athletes, especially professional ones. All of this was made evident by Michael Lewis’s reporting in his book Moneyball, which was subsequently made into a movie. Today, massive quantities of explicit information are collected on all manner of a player’s activities, the results of specific tactical and strategic actions taken on the field, decisions made by coaches, responses of fans, and results in advertising. Data analytics is a core component of all manner of AI and is one of the earliest and most pervasive uses of this kind of computing. One can expect such use of analytics to spread rapidly across even more jobs and industries than it already has, extensive as that reach is. My research suggests that as appreciation for the evolving forms of AI grows, we will see more of this. The challenge has always been to determine to what, specifically, some new form of IT should be applied. That is a gating factor in how IT is deployed. AI can and does already look over its back fence to see what other professions and industries do, far better than humans have done over the previous century. How that occurs will, of course, be something historians and IT professionals will track as it unfolds. Note that AI has reached a level of sophistication such that we can give it agency; it has the capability of conducting investigations without human permission, let alone initiation.

Some Final Personal Thoughts

Earlier, I suggested that I had joined the ranks of those increasingly interested in the role of tacit knowledge, initially as part of my ongoing analyses of how information has evolved over the past two centuries. Given the rapid evolution of AI in the past two decades, and the increased capability and speed of computers when combined with emerging forms of AI, it became clear that tacit knowledge could no longer be left to the priests and philosophers, to professors of literature, linguists, cultural anthropologists, and sociologists. Individuals working in the hard sciences, business professionals, humanists, and scholars across all the social sciences now need to better understand the characteristics of tacit knowledge and how to leverage it. That is what the continuous evolution of IT, and more specifically AI, has wrought. The public at large must now come to understand tacit knowledge and not simply acknowledge the role of explicit facts, fake facts, and much of the nonsense to which it is exposed on social media. That realization, I believe, is a new reality for all. For that reason, I have written a short introduction to the subject, aimed at the public at large but with deep respect for the norms of the academic and professional communities: Beyond the Facts: Tacit Knowledge and the Hidden Infrastructure of Today’s Informed Times. There, I discuss in more detail the idea of islands of information, define tacit knowledge, explain the notion of bridges, and describe how tacit knowledge is applied today in business, in government, and by people at large. Since many employees are using knowledge management and critical systems thinking, I address those subjects too, because of the rapidly impinging role of AI in those practices. I discuss the role of the tacit at the individual level and conclude with a discussion of how explicit and tacit knowledge interact.

I broke out of the confines of the philosophers’ world of tacit knowledge to speak to a wider audience. Because this is tacit knowledge, it is difficult to sense whether the topic was covered in sufficient detail to satisfy all of our preferences for explicit explanations. That is the ultimate dilemma we face when discussing the tacit in a world in which AI is acquiring the ability to go back and forth across our metaphorical bridge. We will have to tell each other how that is going.


Bibliography

Atkinson, Robert D. and David Moschella (2024), Technology Fears and Scapegoats (Palgrave).

Collins, Harry (2010), Tacit and Explicit Knowledge (University of Chicago Press).

Cortada, James W. (2016). All the Facts: A History of Information in the United States Since 1870 (Oxford University Press).

Cortada, James W. (2026), Beyond the Facts: Tacit Knowledge and the Hidden Infrastructure of Today’s Informed Times (Rowman & Littlefield).

Gray, Mary L. and Siddharth Suri (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (HarperCollins).

Irani, Lilly (2015). “Difference and Dependence Among Digital Workers: The Case of Amazon Mechanical Turk.” South Atlantic Quarterly, 114 (1), 225-234.

Muglia, Bob and Steve Hamm (2023). The Datapreneurs: The Promise of AI and the Creators Building Our Future (Peakpoint Press).

Yost, Jeffrey R. (2023) “Bots, Rhymes and Life: Ethics of Automation as If Humans Matter.” Blockchain and Society. [www.blockchainandsociety.com].

 

Cortada, James (May 2025). “When Tacit Knowledge Meets Artificial Intelligence.” Interfaces: Essays and Reviews in Computing and Culture Vol. 5, Charles Babbage Institute, University of Minnesota, 27-37.


About the author: James W. Cortada is a Senior Research Fellow at the Charles Babbage Institute, University of Minnesota—Twin Cities. He conducts research on the history of information and computing. He is the author of Birth of Modern Facts (Rowman & Littlefield, 2023), Today’s Facts (Rowman & Littlefield, 2025), and Inside IBM (Columbia University Press, 2023). He is currently conducting research on the role of information ecosystems and infrastructures.



The Business History Conference (BHC), now more than seventy years old, continues to thrive, as exemplified by a tremendously engaging meeting in Atlanta, March 13-15, 2025. BHC’s program has evolved in diverse and wonderful ways in recent years and decades to include and embrace ever more labor, gender, race, environmental, and cultural and intellectual history of businesses, organizations, enterprises, and political economy.

Room of panelists and audience members at 2025 Business History conference.
University of Florida’s Paula de la Cruz-Fernandez standing at the front left of the photo, closing out the session she organized and led on oral history. Seated up front are the other members of the panel: Harvard’s (HBS’s) Geoffrey Jones, UVA’s Olivia Paschal, UFL’s Sean Patrick Adams, CBI, UMN’s Jeffrey Yost, Hagley’s Benjamin Spohn, and African Business School’s Laurent Beduneau-Wang (who is blocked from view in this image by an audience member).

A Business History Oral History Roundtable

The Charles Babbage Institute for Computing, Information and Culture had a strong representation at this year's BHC, given our past and current fellows (from our doctoral fellows to our mid-career and up “research fellows”), and I was delighted to participate as well. On Friday, March 14, I had the honor of being part of a terrific Roundtable on oral history in business history, entitled “Oral Business History: Recent Approaches.” Standout business historian Paula de la Cruz-Fernandez of the University of Florida, author of the excellent book Gendered Capitalism, organized the Roundtable. In addition to conceptualizing the panel and convening the group, she presented her own oral history work, which fascinatingly focuses on immigrant entrepreneurs, and on labor and gender business history, in the State of Florida.

I had the opportunity to talk about gender history of business, technology, and work as part of my own recent projects, as well as other efforts, past and present, at CBI. I stressed how gender studies and women’s history of computing and software have been a priority of my research as Director, that of my predecessor Thomas Misa, and of many of our research fellows. This has included CBI oral histories, sponsored projects, and publication efforts on women programmers (Misa), and my own oral history, sponsored projects, and writing projects (including my 2017 MIT Press book Making IT Work) on women software and services entrepreneurs, and women pioneers in cybersecurity and privacy. I highlighted how fundamental oral history is to generating collection development opportunities for the CBI Archives in general, and in securing important archival collections of women pioneers in computing, software, and services. In addition to our interest in women’s achievements and experiences in rising to leadership in computing, software, entrepreneurship, and business, we are also very interested in documenting and analyzing women’s labor and work history in programming, manufacturing, engineering, and data processing, including and especially barriers, discrimination, resistance and agency, and gendered environments.

Along with the honor of presenting my and CBI’s work alongside Paula’s, I was delighted to be in the company of the rest of the highly distinguished panel, including leading business historian Geoffrey Jones of Harvard Business School (HBS). Jones, a past President of BHC, co-leads HBS’s Creating Emerging Markets, a wondrous, continuing project, now more than a half dozen years old, which focuses on the Global South. (I encourage everyone to read Geoffrey Jones and Tarun Khanna’s engaging and important oral history book, which grew out of the project, Leadership to Last: How Great Leaders Leave Legacies Behind.)

About a year ago, I had the opportunity to attend Harvard Business School’s “Oral History and Business in the Global South Workshop,” following which I published a lengthy review essay on this pathbreaking special event in Interfaces: Essays and Reviews in Computing and Culture.

Leadership to Last book cover

 

One especially interesting element of the project, and of Jones’ BHC presentation, was that in addition to the incredible scholars conducting these interviews and the impactful entrepreneurs being interviewed (more than two hundred interviews to date), HBS is utilizing artificial intelligence to create content-retrieval and generation-augmentation tools drawing from the oral history transcript database. The oral histories, incredibly rich resources on multiple regions of the Global South, are professionally video recorded. Not surprisingly, many of the interviews have achieved substantial classroom as well as research use. It is likely that the new AI tools, carefully controlled and drawing from a smaller, limited dataset (the project’s oral history database), will aid researchers’ and educators’ use of this valuable resource.

Other distinguished participants on the panel included University of Florida’s Sean Patrick Adams, Hagley Museum’s Benjamin Spohn, African Business School’s Laurent Beduneau-Wang, and University of Virginia’s Olivia Paschal. I found Paschal’s discussion of her oral history work interviewing past employees of giant Arkansas-headquartered companies (including Walmart and Perdue Farms) particularly intriguing. Paschal stressed the outsized impact these firms have had on labor, culture, and life in the relatively rural state of Arkansas.

By design, there was a lengthy question-and-answer period, and this conversation was especially rich in content. Among the topics discussed was business history and memory. From the audience, Florida International University Professor of History and CBI Research Fellow Kenneth Lipartito expressed how wonderful it would be to have a future BHC panel on business history and memory, or on business oral history and memory. Excitingly, participants in this year’s oral history panel have already begun preliminary planning for such a proposal for next year’s BHC in London.

Cover of Surveillance Capitalism in America book

 

Exploring Surveillance and Political Economy

In an important session on “Surveillance at Work” and corporate control, held on Friday afternoon, Lipartito presented a paper entitled, “The Chandlerian Panopticon: Surveilling Workers and Managers in American Railroads, 1850-1890.” Among the papers in this session were two presentations related to the telecommunication and computing industries. Princeton’s Bianca Centrone discussed housing at Italian firm Olivetti, and University of New Hampshire’s Josh Lauer gave a paper entitled, “Disciplining Telephone Users: Telephone Talk and Instrumentation of Personal Communication in the United States, 1880-1920.”

Half a decade ago, Lipartito and Lauer teamed up to give one of the six deeply insightful keynotes at CBI’s two-day Just Code symposium. Shortly thereafter, they published their edited volume Surveillance Capitalism in America, a tremendous book extending from a Hagley workshop and published in the museum and library’s associated book series, Hagley Perspectives on Business History and Culture, with University of Pennsylvania Press (2021).

Opposite our oral history panel session, Anne McCants of MIT was Chair and Discussant of a session entitled “Construction, Collaboration, and Collectiveness: Public and Private Partnerships in Modern Business History.” One of the papers I was especially sorry to miss in this session was by Smithsonian Curator of Computing and past CBI Tomash Fellow Dr. Andrew Meade McGee. Andrew presented “High Technology Aerospace Entrepreneur Versus the Expert Labor of the Federal Workforce: Roy Ash and Nixon-era Conglomerate Approaches for Federal Government Regulation.”

Seated at a table are Corinna Schlombs, CBI Senior Research Fellow Dr. Jim Cortada, and The Ohio State University Prof. David Stebenne.
Past CBI Tomash Fellow and Rochester Institute of Technology Prof. Corinna Schlombs, CBI Senior Research Fellow Dr. Jim Cortada, and The Ohio State University Prof. David Stebenne.

Big Blue and Labor History

On Saturday, there was an excellent panel “Working at IBM” that included papers by past CBI Tomash Fellow and Rochester Institute of Technology Professor of History Corinna Schlombs and CBI Senior Research Fellow James (Jim) Cortada.

Cortada spoke on IBM’s famed “grand bargain”: compensation and lifetime employment for employees in exchange for forgoing unionization. It was a bargain that held strong for more than half a century before the corporation broke the deal when it hit more difficult competition in recent decades, initiating large-scale layoffs in the U.S. alongside fast growth in India and other lower-cost labor countries. Jim’s presentation followed Ohio State University Professor of History David L. Stebenne’s discussion of the early history and contexts of the “grand bargain,” from IBM’s formative years through to the 1960s. With this common theme, the two talks worked very well together.

Corinna Schlombs’ presentation, quite importantly, provided a different angle on labor and work history at IBM to round out this fantastic session. She concentrated on key punchers and key entry operators in punch card and digital computing. Her paper, entitled “Data Entry Challenges: IBM Work and Technological Change,” provided an especially compelling analysis of women and gender history at IBM in data entry, a subject which, despite the large historiography on this storied corporation, has previously been wholly ignored. Corinna’s BHC paper draws on her larger National Science Foundation-funded project on labor and gender in information technology, for which she has conducted research at CBI. We are also delighted that Corinna plans to donate her oral history interviews from the project to CBI. Georgia Tech Professor Emeritus, Past President of the Society for the History of Technology, and longtime friend of CBI Steven Usselman chaired and served as the discussant for the session, offering great insight on the political economy and antitrust context of IBM.

There was substantial and impressive content on business and automation, including a session on Saturday simply titled “Automation.” This session, chaired by University of Maryland’s David Kirsch with MIT’s Ellan Spero as discussant, ranged from papers examining the concept broadly, like University of Southern California’s Salem Elzway’s “Automation: The Past, Present, and Possible Future of a Concept,” to explorations of automation in factory and office settings in particular companies, industries, or endeavors. With the latter, recent CBI Tomash Fellow Dr. Alex Reiss-Sorokin, of Princeton University and the Institute for Advanced Study, presented an important paper entitled “The Computer in the Law Firm: The Early Automation of Legal Research Work, 1964-1970.”

Norberg Grantees

Recent CBI Norberg Travel Grant recipients (who conducted research in the CBI Archives—an unparalleled set of 320 collections spanning information technology and the digital world) also gave papers on the BHC 2025 program. They included Harvard’s Mark Aidinoff, Columbia’s Ella Coon, and Johns Hopkins University’s Jacob Bruggeman, with their compelling historical research on technological federalism, electronic assembly in Korea, and hiring hackers, respectively. Newly awarded Norberg grantee Ethan Dunn of Rutgers was also on the program, presenting on the American Bankers Association. Ethan will be drawing from our rich materials in the Burroughs Corporate Records and other CBI collections on banking in archival research he will conduct later this year.

*            *            *

As usual at the Business History Conference, there were multiple strong plenary sessions. In addition to those with talks by, or panels of, senior scholars, the Krooss Dissertation Prize Plenary Session is always a major highlight. One of the elements I love most about the BHC is that it is small enough (a few hundred scholars) to have a very inviting culture, including and especially toward doctoral students and junior scholars, while being large enough to meet and network with new contacts at all career stages who are doing fascinating research.


Bibliography

“Charles Babbage Institute for Computing, Information and Culture Oral History Program and Resources.” CBI Oral Histories (umn.edu)

Cortada, James W. (2019). IBM: The Rise, Fall, and Reinvention of a Global Icon (MIT Press).

“Creating Emerging Markets,” Harvard Business School. “Creating Emerging Markets” - Harvard Business School (hbs.edu)

de la Cruz-Fernandez, Paula. (2021). Gendered Capitalism: Sewing Machines and Multinational Business in Spain and Mexico, 1850-1940 (Routledge).

Jones, Geoffrey, and Tarun Khanna. (2022). Leadership to Last: How Great Leaders Leave Legacies Behind (Penguin Business).

Lauer, Josh, and Kenneth Lipartito. (2021). Surveillance Capitalism in America (University of Pennsylvania Press).

Misa, Thomas J., ed. (2010). Gender Codes: Why Women Are Leaving Computing (Wiley/IEEE).

Schlombs, Corinna. (2019). Productivity Machines: German Appropriation of American Technology, from Mass Production to Automation (MIT Press).

Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry (MIT Press).

Yost, Jeffrey R. (2024). “Harvard Business School’s ‘Oral History and Business in the Global South’: A Review Essay and Reflection.” Interfaces: Essays and Reviews in Computing and Culture v.4, https://cse.umn.edu/cbi/interfaces#Harvard.

Yost, Jeffrey R. and Gerardo Con Diaz, eds. (2025, forthcoming in Sept.). Just Code: Power, Inequality, and the Political Economy of IT (Johns Hopkins University Press).

 

Yost, Jeffrey R. (April 2025). “Business History Conference 2025, and CBI’s Participation.” Interfaces: Essays and Reviews in Computing and Culture Vol. 5, Charles Babbage Institute, University of Minnesota, 20-26.


About the author: Jeffrey R. Yost is CBI Director and HSTM Research Professor. He is Co-Editor of Studies in Computing and Culture book series with Johns Hopkins U. Press and is PI of the new CBI NSF grant "Mining a Useful Past: Perspectives, Paradoxes and Possibilities in Security and Privacy." He is author of Making IT Work: A History of the Computer Services Industry (MIT Press), as well as seven other books, dozens of articles, and has led or co-led ten sponsored history projects, for NSF, Sloan, DOE, ACM, IBM etc., totaling more than $2.3 million, and conducted/published hundreds of oral histories. He serves on committees for NAE, ACM, and IEEE, and on multiple journal editorial boards.



parents & Jennifer Family Ties
Figure 1: Elyse is intrigued by the computer baseball Jennifer is playing, while Michael enjoys the analog version.

The 1987 Family Ties episode “Matchmaker” begins, as many episodes do, in the Keaton family kitchen. Youngest sister Jennifer (Tina Yothers) is sitting at the kitchen table working on a computer. To habitual viewers of the show, the computer on the kitchen table is a visible disruption to the typical mise-en-scène. Dad Steven (Michael Gross) strides over frowning, “Jennifer, I told you I don’t want that computer in the kitchen.” Eldest son Alex (Michael J. Fox) looks up impishly, “Dad, computers are part of our lives now, join the 80s [audience laughter] ... join the 70s. [louder audience laughter]” This scene sets up the typical tension of a Family Ties episode. Generational gaps between white hippie parents Steven and Elyse (Meredith Baxter) and their Reagan-era kids Alex, Mallory (Justine Bateman), and Jennifer (and later Andy) create conflict, humour, and ultimately opportunities for family communication and resolution—all in the network-required twenty-two minutes.

Family Ties was one of the most popular television programs of the 1980s. Across its seven-year run (1982-1989), it deftly wove relevant issues like class, alcoholism, and teen sexuality into its sitcom formula. It is not remembered as a technological archive. While there is significant scholarship on more contemporary digital technology and everyday life, this episode matters because it serves as an example of early, pre-networked digital history of everyday life in popular culture. While science fiction films or hobbyist publications are often the more obvious sites for studies of early digital history, numerous 1980s films and television episodes about everyday life featured computers, including Cheers, Roseanne, thirtysomething, Pretty in Pink, and Working Girl, to name a few. Receiving 22.3% of the audience and ranking first when this episode aired in July 1987, Family Ties was particularly prominent in the American cultural landscape (“TV Ratings”). Indeed, these viewership numbers outpaced the 8% of American homes that actually had a computer in 1984 and the 15% that owned one by 1989 (Kominski).

As Bo Ruberg argues about computer dating ads in the personals columns of the 1960s and 1970s, episodes like “Matchmaker” similarly offered computer engagement to just as many, if not more, people than actual computers did in the 1980s. The episode imagines how each member of the family might interface with the computer. With only 15% of American households owning a computer by the end of the 1980s, the narrative is in some ways aspirational. This brief essay evaluates the overlapping storylines about domestic computer use and positions them within larger technological and social economies—particularly in relation to gender. By turning to a moment when many people were still only anticipating the potential personal uses of computers, my analysis of popular culture as a site of digital history emphasizes that non-technical narratives are crucial vectors that shape understandings of technology and their use in American society. 

Elyse & Steven Family Ties
Figure 2: Elyse and Steven start enjoying the computer.

Family Computing

After the initial kitchen interaction, Steven agrees that Jennifer needs the computer for school but asks her to use it in another room. She acquiesces and, gathering up the various components, moves to the living room. Steven runs behind, “Oh not the living room, there’s already an electric clock in the living room [audience laughter].” In this plot line, Jennifer and her parents negotiate the computer’s place and purpose in the home. This intergenerational narrative responds to and anticipates the growing interest in domestic computing culture.

Although early personal computers were overwhelmingly associated with men, by the 1980s computer companies, popular media, and even educational institutions were actively trying to change that perception. Magazines from Family Computing to Redbook touted the computer as the technology for both domestic and professional organization and efficiency. As early as 1983, ads in non-technical women’s magazines consistently listed courses for learning “Computer-Assisted Bookkeeping” and how to “Be Your Own Computer Expert.” Television and print media also featured numerous articles and ads extolling the merits of educational software and video games for children. In 1982 Good Housekeeping ran an article titled “Video Games: These Teach Too” and the following year Atari marketed its program Sesame Street: The Children’s Computer Workshop with an emphasis on preparedness for the dawning ‘computer age’ (Atari). Geared toward a younger female audience, Seventeen also suggested the relevance of computers to girls and young women as early as 1982. An article in the October 1983 issue titled “Get Ready for the Computer Revolution” suggests the need for computer literacy for all youth (Maeroff). Two years later, “Computer-Friendly,” an anecdote sent in by a reader, describes her initial fears and subsequent love for the computer—once she learned how to use it (Lee). If only Steven had been keeping up with Mallory’s Seventeens, he might have come around sooner.

The initial reason for the computer being in the home is Jennifer’s academic success—she won’t be a casualty of the computer revolution. While Jennifer is shown working independently on the computer, she also introduces her parents to computer games. With brief instruction from her daughter, Elyse hits a home run in computer baseball and smiles at the “…little computer guy patting the batter on his behind [audience laughter].” This openness to computer games stands in stark contrast to a storyline from 1984 where Elyse almost quits her new job due to frustration with the computing expectations of her office. Counter to the notion of ‘computer widows’ or the ‘technically illiterate’ women of past generations, this episode offers a feminist representation of home computer use that is employed not only for academic success but also for mother/daughter recreation (Hilu, Family 203-204; Spigel 116).

Steven initially opposes the “silly computer games,” even if he can appreciate the computer’s use for education. As Jennifer and Elyse attempt to draw him in, he reluctantly agrees, “Okay, one pitch. Just to prove how dehumanizing this game is.” Hitting the keys awkwardly, he swings and misses three times. Jennifer monotones, “Strike three, you’re out Dad.” Steven indignantly sits down at the computer and begins hitting the sides with frustration, “What’s wrong with this stupid computer ump [audience titters].” Both Elyse and Jennifer admonish him to calm down. He takes several deep breaths and leans back. He gestures towards the computer as he says, “See how dehumanized I became [audience laughter].” Although this point of view was supposedly laughable by 1987, Fred Turner suggests that it was a legitimate viewpoint just a generation earlier. Many in the counterculture of the 1960s—of which Steven and Elyse are clear representatives—viewed the computer as a potentially oppressive force. As Turner writes, “transformation of the self into data on an IBM card marked the height of dehumanization” (16). However, computer developers through the 1980s, and particularly into the dotcom boom of the 1990s, viewed their technology in very different terms. Growing from the New Communalists’ and Stewart Brand’s sensibility in the Whole Earth Catalog, they positioned the computer in line with counter-culture ideals of democratization and free speech—a technology that could deliver on creative individualism and collaborative sociability (Turner 9-16). Although Steven and Elyse don’t have such an explicit ideological shift, their evolving perspective imagines how former 1960s hippies could move beyond their initial distrust to join the computer age.

The editor’s letter in the October 1985 issue of Family Computing is titled “At first the kids were a cover-up.” The magazine’s editor, Claudia Cohl, goes on to discuss how parents were initially purchasing computers for their children, but as the decade continued, they had “come out of the closet” about wanting to learn how to use one themselves. Although the computer entered the home as an educational device, the episode depicts Steven and Elyse as increasingly enjoying computer sports games together. While romance and sex software for couples did exist in the 1980s, this episode imagines that even seemingly non-romantic computer games could promote renewed intimacy for long-term couples (Hilu, “Calculating” 153-154). As they are playing computer basketball, Jennifer comes into the kitchen and unplugs it; it’s time for the computer to be returned. Elyse suggests to Steven that “You could always put aside your personal feelings and buy her one.” He responds, “You’re right, I should think about her needs first.” They run after Jennifer [audience laughter]. Steven and Elyse are still in a thinly veiled computer closet, but the doubled promise of shared romantic leisure and preparing a child for the ‘computer revolution’ ultimately draws them out.

Alex Mallory Family Ties
Figure 3: Mallory looks on as Alex demonstrates how the computer can find her ideal match.

Alex P. Keaton’s Guide to Computer Dating

The reveal that the computer has been rented for a week and not purchased, in addition to the ongoing debates about where in the Keaton home it should be operated, further signals the larger uncertainty about computers as a domestic technology. As personal computer sales boomed through the 1980s and into the 1990s, who should be using the computer, and to what end, was a common domestic negotiation. The debates between Steven and Elyse and the varied activities of computer games and Jennifer’s education already reveal the computer’s dynamic function within the household (Cassidy). The second storyline is about the computer as a matchmaking tool, hence the episode’s title. “Matchmaker” is one of many 1980s electronic dating storylines on television programs including Diff’rent Strokes, Three’s Company, The Facts of Life, and 227. Although Mallory dates a variety of men over the show’s run, this episode is notable for Alex and the computer’s involvement in the process. Alex’s attempts and failures to control the process speak more clearly than many contemporary episodes do with regard to the history of computer dating that runs from the 1960s into the 2020s. While computer dating in the form of questionnaires and punch cards had largely waned in the United States by the 1980s—not to be revived until website and app dating in the early 2000s—the prevalence of these storylines indicates a crowded cultural imaginary for electronic intimacy. As in the other storyline, this plot considers how the computer might be a part of everyday life—and what its limitations were.

In their first scene, Mallory cries to Alex that she keeps going out on lousy dates. Alex reassures her that “If you want to have a guy to date, then you should have a guy to date...and I’ll find him for you.” Alex’s ego and conservatism mean that he often misreads social situations or says something inappropriately self-aggrandizing—typically for laughs. While Alex’s offer seems to anticipate a laugh from the live studio audience, in this case, the suggestion that Alex will find Mallory an appropriate date is taken at face value. The next day, Alex has created a computer dossier of eligible young men from his university. He solicits Mallory’s input on her desired type of date and sets to work creating a program designed to find the appropriate match. The episode replicates the push of early computer dating services to mitigate accusations that the services were ‘sleazy’ or just meant to facilitate sex by having Mallory tearfully proclaim that she’s “only 17” and looking for a “guy to date.” The reminder of her age and the emphasis on dating rather than sex or ‘hooking up’ both appease NBC standards and practices and reinforce the respectability of computer dating.

While early versions of computer dating framed men as consumers and women as products, notably this episode diverges from that narrative. Mallory is in search of potential male dates. But the episode is hardly subversive, because for both computer and subsequently video dating, legitimacy was also informed by who comprised the pool of potential dates. Nascent computer dating in the 1960s was the purview of mainly white male college students with access to computer mainframes (Gilmor). This cohort of programmers were trying to make the process of meeting women more straightforward and less up to chance. Mar Hicks explains that, as in other modes of early computing, this concentrated power in a core group of white men who approached computerized dating and romance as a way “to replicate existing social patterns and hierarchies even more efficiently” (Weigel 170). Since the rise of online (both website and app) dating in the 1990s, computer dating services can aggregate significantly more data across a potentially broader array of mates through ever more sophisticated algorithms. And yet, the people setting the algorithms for matchmaking have not necessarily changed radically. Dan Slater points out that many of the owners and developers of sites like Plenty of Fish and OKCupid are “business-minded, unemotional math guys” who view dating and love as a fickle product with numerous variables. Ideally these variables can be managed through programming (3-4).

By designing the parameters of his program around the likely white, upwardly mobile men he knows from college, Alex consolidates his ideal social order even further. As he sets about finding Mallory a date, he dismisses her input on the important qualities in a guy—such as sensitivity and a sense of humour—and instead sets to work running the program that he feels will produce an appropriate match. Mallory’s longest-running boyfriend on the show, Nick (Scott Valentine), was a motorcycling environmental artist whose gruff exterior belied his emotional maturity and kind personality. Despite these positive qualities, Nick assuredly would not have been included in Alex’s computer dating pool.

The audience never sees the computer screen in this plotline, nor do we gain insight into how potential dates are cross-referenced. But this was consistent with real-life computer dating services, which emphasized that a computer was involved more than how it actually worked. For some services of the 1960s, there is skepticism that a computer was involved at all—the machine may have served as a marketing ploy while humans did the real matching. While today’s algorithm-based dating apps draw from more data to present potential matches, there is a similar opacity about how the algorithms function. There is significant speculation that, similar to streaming and social media sites, dating apps make use of personalization data to show users potential matches based on their existing preferences (Nader 238-239; Voll 16). Audiences of computer dating advertisements and of these shows—as well as potential users of computer dating services—have thus been left to make assumptions about what the computer could conceivably accomplish for matchmaking, rather than gleaning any details. As in other computer dating television episodes, “Matchmaker” moves from Alex and Mallory inputting options into the computer to the night of the date.

Before the date arrives, Alex espouses the astonishing merits of computer matchmaking. And yet, his comments blur the line between his role and the computer’s in the matchmaking. Alex notes that he has “…handpicked Mallory’s date…one of the most eligible men at Leland College.” But he then goes on to say, “I fed your vital statistics into this computer [he pats the top of the computer Jennifer is attempting to work on] and I found out you guys are compatible in everything from breakfast cereal to positions on nuclear disarmament.” The episode asks the audience to imagine that the computer exceeds human capabilities. And yet, Alex’s statements trouble the narrative of omniscient computers by re-asserting the central role of the programmers.

As Mallory’s date Roger (Bill Allen) arrives, it is quickly clear that the two have no chemistry. And yet, Alex continues to try to insert himself into their relationship in the hopes that he can somehow facilitate a successful romantic connection. Playing into the cultural stereotype that programmers lack interpersonal facility, Alex sits between them on the couch with an arm over each as they introduce themselves. Jennifer quips, “Is Alex their translator?” Having the entire family present for this initial encounter again limits any potential ‘sleaze’ factor that this iteration of computer dating might have. Despite the lack of connection between Mallory and Roger, Alex convinces Mallory to go on a second date so that he can double date with them. But Alex’s attempts to extend the computer match and facilitate a romantic connection end up alienating not only Mallory and Roger, but his own date as well. Alex’s insistent involvement not only calls into question how effective computer dating is but also opens the possibility that only those who lack social skill would encourage it in the first place.

Consistent with the cadence of the other computer dating episodes, this storyline concludes with a post-mortem scene back at the house that offers a discussion of the limitations of computer dating—and meets the sitcom narrative requirement for re-establishing the status quo. As Ruberg has noted about personals advertisements for computer dating, these episodes helped a public audience imagine how a computer might “enter into the everyday personal lives of its users.” As this scene shows, the episodes all seem dubious of computer dating, but not of computers themselves.

Back at the Keaton house, Alex complains to Mallory, “I just do not get it, why didn’t you two hit it off…you looked so good on paper.” The scene extends to computer dating what Eva Illouz describes as the “two conflicting sets of metaphors” about romantic relationships that increasingly emerged in the context of late capitalism. One draws on modes of consumption to characterize dating, “To create or rejuvenate spontaneity, adventure, fun,” while the other follows the “purposive rationality of the sphere of production” (188) and emphasizes hard work, rational responses, and ultimately the long-term stability of marriage (Baym 75). Similarly, the exchange positions this narrative within a longer discourse about the potentials and limits of both computers and programmers. As early as the 1960s and continuing to the present, there has been a frequently ambivalent cultural narrative that the computer could process large swaths of data and yet lacked the acuity to meaningfully process or produce human affect. Similarly, computer programmers were often stereotyped as creative types who were nonetheless anti-social and/or ill suited to existing norms (Ensmenger 143-144).

Alex perceives the computer as the pinnacle of rationality, able to look past the clutter of human emotions to produce an ideal and quantifiable romantic match. Mallory points out that while the computer has produced compatible matches, it has failed to deliver the intangible: “That magic, that special something has to be there. It defies logic.” Alex responds, “Now we’re getting down to it, I’m made of paper, I mean I like paper, what am I trying to say?” Mallory elaborates on his observation, “You’re trying to say that you’re not good with emotions but you’re great with facts and figures and you love a situation where you can control all the elements, and the human element is removed.” Mallory’s statement aligns the computer with the production mode of romance, inherently ill equipped to produce the spontaneous chemistry that characterizes the consumption mode of dating. It also positions Alex within a lineage of 1960s and 1970s computer dating engineers attempting to control the elements of dating and extend their worldview through their pairings (Strimpel; Hicks). Conversely, Steven and Elyse’s growing affection for the computer helps imagine how it might add a spontaneous, fun, consumption facet to an existing relationship.

Conclusion

As one of the most popular television shows of the 1980s, Family Ties was frequently a forum for contemporary issues (Newcomb). This episode introducing the computer to the Keaton home is notably distinct from other “computer episodes” of the same decade because it attempts to address a variety of uses including education, leisure, and potential roles for forming and sustaining romantic relationships. Overall, changes in sitcom relationships are often limited and slow-growing because of the genre’s largely episodic narrative organization, with each episode frequently following a disruption and re-establishment of the status quo (Butler 17; Mintz 42-43). In technologized matchmaking storylines, and this episode is no exception, the narrative requirements of the genre end up evoking a skeptical ambivalence about how useful or successful computers might be for romance.

Returning to the episode also offers some precursors for our present digital culture. Alex’s inept yet increasingly aggressive attempts to harness computational power to order the lives of the women around him raise some obvious parallels to our current moment. Consistent with the larger paradox of the show, the episode brackets how dangerous Alex’s attitudes can be by playing them for laughs, repeatedly insinuating that they pose no credible threat—a course that, as the menstruating people now deleting cycle-tracking apps can attest, is a dangerous one (Ries). While a popular sitcom episode seems like a laughably un-technical archive, it illustrates the ongoing gendered, intergenerational, and functional negotiations that continue to shape digital technology use. As ever, popular culture offers a shared context through which to understand how computers are part of our lives—then and now. *

 

*Author's note: Although “Matchmaker” was broadcast in 1987 near the end of the 5th season, it was written and produced sometime during the 3rd season (1984-1985). There are no available details that I can find as to why this decision was made. To the minimal extent that Family Ties was serialized, this episode would have seemed anachronistic in its initial broadcast. However, this two-year gap does not seem to significantly impact attitudes on personal computing as I present them in this essay.


Bibliography

“Atari.” (November 1983). Advertisement, Parents, 28–29.

“Atari.” (November 15, 1983). Advertisement, Woman’s Day, 119.

Baym, Nancy K. (2015). Personal Connections in the Digital Age. 2nd ed. (Polity Press).

“Be Your Own Computer Expert.” (June 1983). Magazine advertisement, Redbook.

Butler, Jeremy G. (2019). The Sitcom. (Routledge).

Cassidy, Marsha F. (2001). “Cyberspace Meets Domestic Space: Personal Computers, Women’s Work, and the Gendered Territories of the Family Home.” Critical Studies in Media Communication, vol. 18, no. 1, pp. 44–65. DOI.org (Crossref), https://doi.org/10.1080/15295030109367123.

Cohl, Claudia. (October 1985). “At First the Kids Were the Cover-Up.” Family Computing, 4.

“Computer-Assisted Bookkeeping.” (September 1983). Magazine advertisement, Redbook.

Hicks, Mar. (2016). “Computer Love: Replicating Social Order Through Early Computer Dating Systems.” Ada: A Journal of Gender, New Media, and Technology, no. 10, https://marhicks.com/writing/Hicks_EarlyComputerDatingSystems_AdaNewMediaJournalNov2016.pdf.

Hilu, Reem. (Sept. 2023). “Calculating Couples: Computing Intimacy and 1980s Romance Software.” Camera Obscura, vol. 38, no. 2, pp. 145–71. https://doi.org/10.1215/02705346-10654941.

Hilu, Reem. (2017). The Family Circuit: Gender, Games, and Domestic Computing Culture, 1945-1990. PhD dissertation, Northwestern University.

Illouz, Eva. (1997). Consuming the Romantic Utopia: Love and the Cultural Contradictions of Capitalism. (University of California Press).

Kominski, Robert. (1988). “Computer Use in the United States: 1984.” Current Population Reports, series P-23, no. 155, p. 1.

Kominski, Robert. (1991). “Computer Use in the United States: 1989.” Current Population Reports, series P-23, no. 171.

Lee, Marie. (November 1985). “Computer-Friendly.” Seventeen, 58.

Maeroff, Gene. (October 1983). “Get Ready for the Computer Revolution.” Seventeen, 147-148, 162.

Mintz, Lawrence E. (1985). “Ideology in the Television Situation Comedy.” Studies in Popular Culture, vol. 8, no. 2, pp. 42–51. JSTOR, http://www.jstor.org/stable/23412949.

Newcomb, Horace M., and Paul M. Hirsch. (1983). “Television as a Cultural Forum: Implications for Research.” Quarterly Review of Film & Video, vol. 8, no. 3, pp. 45-55. https://doi.org/10.1080/10509208309361170.

Ruberg, Bo. (2022). “Computer Dating in the Classifieds: Complicating the Cultural History of Matchmaking by Machine.” Information & Culture, vol. 57, no. 3, pp. 235–54. https://doi.org/10.7560/IC57301.

Slater, Dan. (2013). Love in the Time of Algorithms. (Current).

Spigel, Lynn. (1992). Make Room for TV. (University of Chicago Press).

Strimpel, Zoe. (2017). “Computer Dating in the 1970s: Dateline and the Making of the Modern British Single.” Contemporary British History, vol. 31, no. 3, pp. 319–42. https://doi.org/10.1080/13619462.2017.1280401.

Turner, Fred. (2006). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. (University of Chicago Press).

“TV Ratings.” (July 29, 1987). Los Angeles Times. https://www.proquest.com/docview/816086911/32A2A739098E408BPQ/73?accountid=15115&sourcetype=Historical%20Newspapers.

“Video Games: These Teach Too.” (November 1982). Good Housekeeping.

Weigel, Moira. (2016). Labor of Love: The Invention of Dating. (Farrar, Straus and Giroux).

 

Moretti, Myrna. (February 2025). “Part of Our Lives Now: The Personal Computer on Family Ties.” Interfaces: Essays and Reviews in Computing and Culture Vol. 6, Charles Babbage Institute, University of Minnesota, 10-19.


About the author: Myrna Moretti is a Postdoctoral Fellow in the Faculty of Information and Media Studies at Western University in London, Canada. She holds a PhD in Screen Cultures from Northwestern University. Her work focuses on the intersections of popular culture, labour, gender, and technology. She is also a filmmaker.



“Phone hacking” made the news in the United Kingdom last year following Prince Harry’s victory against Mirror Group Newspapers, a British tabloid publisher whose journalists — never shying from a sleazy scoop — exploited weak voicemail PINs to listen in on the lives of the rich, famous, and royal (Lawless, 2023). But the practice of telephone hacking began long before the tabloids snooped on Prince Harry.

In the 1970s, telephone hackers in the U.S. and U.K. created an international technoculture based on exploring and exploiting the telephone system. As the popularity of electronic tinkering fused with countercultural currents, “phone phreaks” in the U.S. and “telephone enthusiasts” in the U.K. formed the first circuits of what would become the “computer underground” of the 1980s.

The electrical engineer Phil Lapsley documented the emergence of the phreaking scene in Exploding the Phone (Lapsley, 2013). What is written about phreaking, however, overemphasizes the influence of phreaks in the U.S. To be sure, phone phreaks in the U.K. were inspired by their U.S. counterparts. But their adoption and adaptation of phone phreaking politics contributed to a global movement in turn. Phreaking, in other words, was co-constructed by tinkerers in the U.S. and U.K. Focusing exclusively on the U.S. narrows the history of phone phreaking, reducing the complexities of evolving practices and political commitments into a geographically and technologically overdetermined episode in the history of U.S. hacker culture. The history of “telephone enthusiasm” in the U.K., as many on the other side of the Atlantic described their activities, is an important and understudied story in the tech-savvy turn from countercultures to cybercultures at the close of the twentieth century (Turner, 2010). As we will see, its shoots in the U.K. grew well.

In the winter of 1973, a series of articles revealed that the British Post Office, which operated the country’s telephone system, had been defrauded from within and without. Inside the Post Office (PO), an unknown number of employees tinkered with at least 75 telephone exchanges in Britain to allow for illegal — and free — long-distance calls. In at least 28 exchanges, employees even installed new circuits — or telephone “fiddles,” as they were called — that the PO’s Investigation Branch estimated cost the organization at least £1.75 million per year, or nearly £18.5 million and $23 million today, adjusted for inflation (“Free-phone racket inside Post Office,” 1). At Bath University, more than 2,000 students could use the loophole to make free calls anywhere in the world (AP, 1973).
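For readers who want to check the conversion: assuming a cumulative UK retail-price multiplier of roughly 10.5 between 1973 and the mid-2020s and an exchange rate near $1.25 per pound (both are my assumptions, not figures from the original reporting), the arithmetic runs approximately as the article states:

\[
\pounds 1.75\,\text{million} \times 10.5 \approx \pounds 18.4\,\text{million}, \qquad
\pounds 18.4\,\text{million} \times 1.25\ \tfrac{\$}{\pounds} \approx \$23\,\text{million}.
\]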

If internal investigations exposed how the PO’s own employees manipulated long-distance dialing from within, they also revealed that “phone phreaks” well beyond the government payroll exploited the telephone system. Around the same time, nine students at Bath University were brought before magistrates on charges of “dishonestly using Post Office electricity.” The courts leveled this opaque charge at the students because they had used unique codes to dial directly into the long-distance trunk lines at a nearby exchange (“Free-phone racket inside Post Office,” 1).

While threats seemingly appeared from within and without, they were two sides of the same coin — the automation of telephony in the mid-twentieth century was rife with opportunities to exploit the system. Employees ranging from executives to secretaries, and a fair share of students, took advantage of techniques and technologies allowing for free calls. Information about access was shared in the process, such that one of the Bath students arrested for “phreaking” in 1973, according to the Sunday Times, had a list of the 75 exchanges where “internal ‘fiddles’ had been located by fellow ‘phreaks.’”

The motives of phreaks varied from PO employees to adventurous students. Whether phreaks were penny-pinching adults cutting corners during the economic shocks of the Energy Crisis or students exploring the technological system as a means of exploring society — or retreating from it — made no difference to the PO, however. The incidents of 1973 marked nothing less than a “serious national problem” reflecting “nationwide” practices of telephone fraud, according to one spokesman at the time (“Free-phone racket inside Post Office,” 1). But what made “phreaking” technologically possible?

women at bell system telephone switchboard ca 1940s
Figure 1: Women at Bell System telephone switchboard ca 1940s.

In both the U.S. and U.K. at the middle of the twentieth century, each country’s telephone system was managed by a monopoly. American telephony was operated and maintained by corporations under the umbrella of AT&T, whose activities included research (Bell Labs), production (Western Electric), and maintenance (regional companies or “Baby Bells”). Together, these institutions constituted the “Bell System.” The Bell System emerged as a monopoly in American telephony dedicated to “universal service” following a 1913 agreement with the U.S. government, known as the “Kingsbury Commitment,” which paused antitrust action against AT&T. Around the same time, in 1912, the British telephone system was monopolized when the General Post Office took over the National Telephone Company; the Post Office’s telephone business was reorganized as Post Office Telecommunications in 1969 and later spun off as British Telecom, today’s BT Group.

As telephone traffic swelled in postwar America and Britain, executives within each telephone system sought to automate the dialing process to streamline operations and save on labor. For the first half of the twentieth century, telephone companies were represented by their operators: thousands of employees, mostly women, who answered and connected callers to whomever they were trying to reach (Grier, 2005; Light, 1999; Lipartito, 1994). Executives at AT&T and engineers at Bell Labs realized that it would simply be impossible for a network stitched together with human switches to meet Americans’ growing demand to talk on the telephone. Automation, they decided, would chart the path to a more robust telephone network. And following the invention of the transistor at Bell Labs during the 1940s, automated dialing was suddenly possible.

Starting in the 1950s and culminating in 1960, the American telephone system deployed a machine replacement for the human operator known as the #4A: an automated switching machine that could route long-distance calls across the country and the world. To behold the #4A was to witness rows and rows of bulky black or gray cabinets, nearly the size of a city block, each a mass of wiring, switches, and hundreds of five-by-ten-inch steel cards with 181 holes. When someone called in, the #4A, emitting a low hum of machinery in motion, reconfigured its parts so that it recognized the digits of the dialed number and determined the optimal paths to route a call to its destination. Thus fired the synapses of what Phil Lapsley has described as a “mechanized brain,” and for its time, the “largest machine in the world” (Lapsley, 44, 46-47).

The British telephone system lagged behind its trans-Atlantic counterpart, but by the 1960s, the PO was making similar changes to its system. With the introduction of subscriber trunk dialing (STD), which identified locales with area codes, a caller could place long-distance calls without the help of an operator. The long-distance network was automated with the Strowger (or step-by-step) electromechanical switching system. The PO had installed these switches in local exchanges, but only brought them to the long-distance network in 1958. To commemorate the achievement, Queen Elizabeth II made the first such call from Bristol to Edinburgh (“International Subscriber Trunk Dialing Introduced,” 2017).

Multi-frequency signals were the electronic keys to the newly automated telephone systems in both countries. Signaling protocols first developed by engineers in the Bell System — technically called dual-tone multi-frequency signaling (DTMF) in the U.S. and MF4 in the U.K. — displaced loop disconnect, or pulse dialing, systems. In time, the rotary dial’s circular layout of 10 digits, used by pulling a finger wheel to produce electrical pulses in the telephone system, gave way to a “touch tone” keypad with 12 push buttons, each button sounding a distinct pair of audio frequencies (Fagen, 1975; Joel Jr., 1982). Together, the new telephone’s push buttons created a multi-frequency system that increased the speed and cost effectiveness of long-distance calls.
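To make the tone-pair idea concrete, here is a minimal sketch in Python (my own illustration, not drawn from the sources cited here). The frequency table is the standard DTMF layout; the helper name key_tone and its parameters are hypothetical:

```python
import numpy as np

# Standard DTMF ("touch tone") layout: each key sounds one low "row"
# frequency and one high "column" frequency at the same time (values in Hz).
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def key_tone(key: str, duration: float = 0.2, rate: int = 8000) -> np.ndarray:
    """Return audio samples for one keypress: two superimposed sine waves."""
    low, high = DTMF[key]
    t = np.arange(int(rate * duration)) / rate
    return 0.5 * np.sin(2 * np.pi * low * t) + 0.5 * np.sin(2 * np.pi * high * t)

# "Playing a tune for a telephone number": concatenate the tone of each digit.
signal = np.concatenate([key_tone(digit) for digit in "5550123"])
```

Because each key is a unique pair of frequencies, the exchange only has to detect which two tones are present to recover the digit, which is precisely what made the signals both fast for the network and reproducible by anyone with an oscillator.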

Executives and engineers in both the U.S. and U.K. systems were proud of their achievements and described them openly in print. In Popular Mechanics, Bell published an advertisement likening long-distance automatic dialing to “playing a tune for a telephone number” on a “musical keyboard,” with each “key” corresponding to specific digital tones (“Playing a Tune for a Telephone Number,” 1950). Bell released educational materials like the film “Speeding Speech,” which described the technical operations of the system at length, even recording the exact tone sequences for each key (“Speeding Speech,” 1950). And its flagship scientific journal, the Bell System Technical Journal, published a nearly step-by-step guide to using the new digital frequencies to start and end long-distance telephone calls (Weaver and Newell, 1954; Breen and Dahlbom, 1960). In the U.K., J. Atkinson’s detailed two-volume investigation of the PO’s network and articles within the Post Office Electrical Engineers’ Journal provided similar information (Atkinson, 1947). To anyone paying close attention, the electronic keys to the telephone systems of both countries were ready for the taking.

Blue_Box_at_the_Powerhouse_Museum
Figure 2: A "blue box" or "bleeper," as British phreaks called it, at the Powerhouse Museum. From Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Blue_Box_at_the_Powerhouse_Museum.jpg.

While Bell and the PO updated their telephone systems, an international community of tinkerers, hobbyists, and pranksters shared information about how to exploit both systems. As in the U.S., the phone phreaks in the U.K. were mostly young, white, and educated men. In 1972, for example, when PO investigators raided a flat in London, they uncovered a group of young men with telephone equipment, printouts of proprietary PO codes, and multifrequency devices for making free calls. Of the nineteen arrested, many were in their 20s with pedigrees from Oxford and Cambridge. In the U.S. and U.K. alike, technical hobbies were a proving ground for young men. Phreaking was hardly different. As the Sunday Times reported, phreaking was “broadly, the outwitting of the telephone system by private ingenuity” (“Free-phone racket inside Post Office,” 1). But the practice was hardly confined to communities of nerdy students who took a liking to electronics.

Phone phreaks themselves narrated the spread of phreaking. In 1972, a columnist at Undercurrents, a U.K. magazine offering “alternative science and technology,” described how the “telephone ripoff game is growing.” As telephone systems in the U.S. and U.K. automated in the 1960s and 70s, replacing operators with giant machines and high-frequency tones, telephone users learned how to co-opt the system to make free calls. Communities of “phone phreaks” emerged in the U.S., even though, in Britain, the author advised, phreaks preferred “the polite term ‘telephone enthusiast’” (“Confessions of a Phone Phreak,” 15). In the 1970s and 80s, phreaks created trans-Atlantic networks of exchange where texts, zines, and technical blueprints circulated. Although telephone technologies differed across the two countries, as did the regimes governing their use, magazines like Undercurrents created a global counterculture rooted in experiments with emerging technology, especially telephony.

Back in ’72, what transfixed the Undercurrents columnist were schematics for the “mute box” and “blue box,” which allowed their users, respectively, to receive calls without charges and to imitate the control signals governing the telephone system. Both boxes could be constructed with basic parts, including capacitors, resistors, switches, and oscillators commonly found in consumer electronics storefronts in both countries. U.K. phreaks copied and shared reporting on the handheld device in Ron Rosenbaum’s 1971 article, “Secrets of the Little Blue Box,” and learned from how-to-build-it guides in zines and magazines of the American New Left, like Ramparts and the Youth International Party Line (Rosenbaum, 1971). U.K. readers copied, modified, and reproduced guides to building and using both boxes over the summer of ’72 (Richardson, 2010). Officials within the U.K.’s Post Office and its investigations division, which managed the telephone system, were infuriated and reportedly tried to suppress reprints. Still, the writer in Undercurrents advised, “[I]t should not be difficult for the eager would-be phreak to get hold of one” (“It’s so cheap to phone your friends…,” 5).
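The blue box’s trick can be suggested in a few lines of code. The sketch below is illustrative rather than a reconstruction of any published schematic: the two-tone digit table is the Bell System’s in-band MF scheme documented in the Bell System Technical Journal (Breen and Dahlbom, 1960), the 2600 Hz tone is the idle-trunk signal described by Weaver and Newell (1954), and the function name is my own invention.

```python
# In-band MF signaling pairs (Hz) per Breen and Dahlbom (1960). A blue box
# first "seized" a trunk by whistling 2600 Hz down the line, then dialed by
# replaying these two-tone signals: KP (key pulse), the digits, then ST (start).
MF = {
    "1": (700, 900),   "2": (700, 1100),  "3": (900, 1100),
    "4": (700, 1300),  "5": (900, 1300),  "6": (1100, 1300),
    "7": (700, 1500),  "8": (900, 1500),  "9": (1100, 1500),
    "0": (1300, 1500), "KP": (1100, 1700), "ST": (1500, 1700),
}

def blue_box_signals(number: str) -> list[tuple[int, int]]:
    """Tone pairs a blue box would play after the 2600 Hz trunk seizure."""
    return [MF["KP"], *(MF[digit] for digit in number), MF["ST"]]

print(blue_box_signals("18005551212"))  # KP, each digit's tone pair, ST
```

The point of the sketch is how little there was to it: once the frequency tables were in print, "imitating the control signals" reduced to generating a dozen known tone pairs with off-the-shelf oscillators.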

If some phone phreaks got into the practice to make free calls, others did so to make powerful political statements. Phreaks in the U.S. and U.K. alike saw the telephone systems as extensions of the government. As social and political movements across the globe took up opposition to “big brother,” “the man,” and the systems each represented in the 1960s and 70s — especially during the Vietnam War and following the Watergate scandal — technologies, infrastructures, and corporations became targets for activists seeking political change. In the U.S., for example, a primary target of New Left provocateurs was AT&T and the Bell System. In this light, the New Left’s tech-savvy oppositional politics, and the practice of phone phreaking, took on a cult-like quality in politics and counterculture.

The countercultural politics of phreaking were sometimes more pronounced in the U.S. Critiques of the Bell System in YIPL, Fifth Estate, Borrowed Times, and other New Left publications attracted readers for their sweeping rhetorical and explanatory force and compelling imagery to match. In these pages, telephone lines were anatomized as the central nervous system in the body electric of an evil American empire, the tools of discipline and punishment for monopoly power, and the oppressive molding that clamped upon Americans through corporate culture and surveillance apparatuses alike.

While phreaking in the U.S. drew from the countercultural influences of the 1960s, phreaking in the U.K. drew from its own counterculture, namely, the “alternative technology” movement, which advocated for alternatives to consumer goods, factory production, fossil fuels, and industrial farming. Within the pages of Undercurrents, and a small think tank called the Centre for Alternative Technology in Wales, alternative technologists covered everything from guides for readers to build windmills or solar farms at home to rough blueprints of “self-organizing, ecologically viable” communities abroad or in rural settings in the U.K.—wherever, in short, authors thought new approaches to living might flourish slightly out of reach of the centers of industrialized, electrified, and, increasingly, computerized society.

Telecommunications technologies and phreaking, as in the U.S., were seen as tools for rebelling against British society. Telephone tinkering was still described as one technique or tool of “countertechnology” for resisting state and corporate surveillance (“The Snoopers and the Peepers,” 10). And magazine covers cast the “liberation of communications” as the key to keeping “big brother,” epitomized as King Kong, in check (Undercurrents no. 7, July-August 1974, cover page).

512px-HackLabCalaFou
Figure 3: Hack lab graffiti art featuring noted American phone phreak Cap’n Crunch, or John Draper (left), at Calafou in Catalonia, Spain, October 2012.

Phreaks from the U.S. and U.K. corresponded, circulating technical information and techniques. Political ideas, however, increasingly defined the trans-Atlantic community of phreaks in the 1970s. In a “Report from Merrie Olde England,” published in the American phone phreak magazine TAP in the spring of 1977, a phreak going by the pseudonym “Depravo the Rat” reported on the English phreak scene. He described differences in pay phone tinkering, credit card fraud, and multifrequency devices. The author even detailed the rise and fall of the U.K.’s most famous phreak, Duncan Campbell — the equivalent of the famed American phreaker John Draper, or “Captain Crunch” — who wrote publicly on phreaking throughout the decade (Rat, 1).

Not all phone phreaking politics aligned with left-leaning critiques of society. Ultimately, the report from England closed on a downbeat note: “The world is coming to an end, or very near it, beginning 1982, through to 1984.” The writer warned of nuclear and biochemical warfare followed by “total economic chaos and starvation.” Citing literature from Isaac Asimov’s Foundation trilogy to apocalyptic films like Soylent Green, and libertarian essays in TAP — “It’s exactly what I’m into politically,” he wrote — Depravo the Rat argued England was “further down the road to collapse than the U.S.” All a phreak could do was “Eat, Drink, and Be Merry…” (Rat, 2).

The world did not end in 1984, but the technological systems phone phreaks explored did. Britain’s telephone system was privatized in 1984, and the U.S. Bell System was broken up by antitrust decree that same year. Advances in computing, portable cellular devices, and satellite technologies would further disrupt the phreaks’ practice — providing new challenges. Legal regimes changed, too, especially with the 1986 Computer Fraud and Abuse Act in the U.S. and the 1990 Computer Misuse Act in the U.K.

But an international community of phone phreaks only grew with the rise of computer hacking. The exchanges between the U.S. and U.K. are an important puzzle piece in assembling the global history of phreaking. Understanding how German, Indian, South American, and African telephony was explored and altered by curious tinkerers and committed fraudsters alike is necessary for a truly global history of phreaking that has yet to be written.


Bibliography

“A Special Issue Dedicated to the Liberation of Communications.” (July-August 1974). Undercurrents, no. 7, https://issuu.com/undercurrents1972/docs/uc07_jan20a. Accessed Aug. 2024.

American Telephone and Telegraph Co. (Feb. 1950). “Playing a Tune for a Telephone Number.” Advertisement, Popular Mechanics.

“Speeding Speech.” (1950). American Telephone and Telegraph Co.

Associated Press. (8 Jan. 1973). “Phantom phone fraud fanatic befuddles Great Britain’s finest.” Chicago Tribune, p. 1.

Atkinson, J. (1947). Telephony: A Detailed Exposition of the Telephone Exchange Systems of the British Post Office. Vol. I. (London). Internet Archive, https://archive.org/details/dli.ernet.288583/page/n7/mode/2up. Accessed Sept. 2024.

Breen, C., and C.A. Dahlbom. (Nov. 1960). “Signaling Systems for Control of Telephone Switching.” Bell System Technical Journal, vol. 39, no. 6, pp. 1381-1444.

Depravo the Rat. (Mar.-Apr. 1977). “Report from Merrie Olde England.” TAP, no. 43. 

Fagen, M.D., ed. (1975). A History of Engineering and Science in the Bell System: The Early Years, 1875-1925. (New York: Bell Telephone Laboratories).

Grier, David Alan. (2005). When Computers Were Human. (Princeton: Princeton University Press). 

Hanlon, Joseph. (17 July 1975). “The Telephone Tells All.” New Scientist, pp. 148-151.

“International Subscriber Trunk Dialing Introduced.” (10 Mar. 2017). Telegraph, https://web.archive.org/web/20170311181802/https://www.telegraph.co.uk/technology/connecting-britain/international-subscriber-trunk-dialling-introduced/. Accessed Sept. 2024.

Joel, A.E., Jr., ed. (1982). A History of Engineering and Science in the Bell System: Switching Technology, 1925-1975. (New York: Bell Telephone Laboratories).

Lapsley, Phil. (2013). Exploding the Phone: The Untold Story of the Teenagers and Outlaws Who Hacked Ma Bell. (New York: Grove/Atlantic Press).

Lawless, Jill. (15 Dec. 2023). “Prince Harry’s phone hacking victory is a landmark in the long saga of British tabloid misconduct.” Associated Press.

Light, Jennifer S. (1999). “When Computers Were Women.” Technology and Culture 40, no. 3: 455–83. 

Lipartito, Kenneth. (October 1994). “When Women Were Switches: Technology, Work, and Gender in the Telephone Industry, 1890–1920.” The American Historical Review 99, no. 4: 1075–1111. 

“The Snoopers and the Peepers.” (July-Aug. 1974). Undercurrents, no. 7, p. 10.

“Confessions of a Phone Phreak.” (July-Aug. 1974). Undercurrents, no. 7, p. 15, https://issuu.com/undercurrents1972/docs/uc07_jan20a. Accessed Aug. 2024.

Richardson, Peter. (2010). A Bomb in Every Issue: How the Short, Unruly Life of Ramparts Changed America. (New York: The New Press).

Rosenbaum, Ron. (Oct. 1971). “Secrets of the Little Blue Box.” Esquire.

“Free-phone racket inside Post Office.” (21 Jan. 1973). Sunday Times of London, p. 1.

Turner, Fred. (2010). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. (University of Chicago Press).

Weaver, A., and N.A. Newell. (Nov. 1954). “In-Band Single-Frequency Signaling.” Bell System Technical Journal, vol. 33, no. 6, pp. 1309-1330.

“It’s so cheap to phone your friends…” (Autumn/Winter 1972). Undercurrents, no. 3, p. 5, https://issuu.com/undercurrents1972/docs/uc03_jan19b. Accessed Aug. 2024.

 

Bruggeman, Jacob A. (January 2025). “Phreaking the U.K.” Interfaces: Essays and Reviews in Computing and Culture Vol. 6, Charles Babbage Institute, University of Minnesota, 1-9.


About the author: Jacob Bruggeman is a PhD candidate in history at Johns Hopkins University, where he studies modern political economy and intellectual history with a focus on technology and policy in the twentieth-century U.S. His dissertation explores how regulation, professionalization, and technological change reshaped the practice and significance of “hacking” in the twentieth century. Jacob’s work has been supported by the Association for Computing Machinery, the Hagley Museum and Library, and the Charles Babbage Institute.