Interfaces
Essays and Reviews in Computing and Culture

Interfaces publishes short essay articles and essay reviews connecting the history of computing/IT studies with contemporary social, cultural, political, economic, or environmental issues. It seeks to be an interface between disciplines, and between academics and broader audiences.
Co-Editors-in-Chief: Jeffrey R. Yost and Amanda Wick
Managing Editor: Melissa J. Dargay
The Business History Conference (BHC), now more than seventy years old, continues to thrive as exemplified by a tremendously engaging meeting in Atlanta March 13-15, 2025. BHC’s program has evolved in diverse and wonderful ways in recent years and decades to include and embrace ever more labor, gender, race, environmental, and cultural and intellectual history of businesses, organizations, enterprises, and political economy.

A Business History Oral History Roundtable
The Charles Babbage Institute for Computing, Information and Culture had a strong representation at this year's BHC, given our past and current fellows (from our doctoral fellows to our mid-career and up “research fellows”), and I was delighted to participate as well. On Friday, March 14, I had the honor of being part of a terrific Roundtable on oral history in business history, entitled “Oral Business History: Recent Approaches.” The standout business historian Dr. Paula de la Cruz-Fernandez of the University of Florida, author of the excellent book Gendered Capitalism, organized the Roundtable. In addition to conceptualizing the panel and convening the group, she presented her own oral history work, which fascinatingly focuses on immigrant entrepreneurs and on labor and gender business history in the State of Florida.
I had the opportunity to talk about gender history of business, technology, and work as part of my own recent projects, as well as other efforts, past and present, at CBI. I stressed how gender studies and women’s history of computing and software have been a priority of my research as Director, that of my predecessor Thomas Misa, and of many of our research fellows. This has included CBI oral histories, sponsored projects, and publication efforts on women programmers (Misa), and my own oral history, sponsored projects, and writing projects (including my 2017 MIT Press book Making IT Work) on women software and services entrepreneurs, and women pioneers in cybersecurity and privacy. I highlighted how fundamental oral history is to generating collection development opportunities for the CBI Archives in general, and in securing important archival collections of women pioneers in computing, software, and services. In addition to our interest in women’s achievements and experiences in rising to leadership in computing, software, entrepreneurship, and business, we are also very interested in documenting and analyzing women’s labor and work history in programming, manufacturing, engineering, and data processing, including and especially barriers, discrimination, resistance and agency, and gendered environments.
Along with the privilege of presenting my and CBI’s work alongside Paula’s, I was honored to be in the company of the rest of the highly distinguished panel. This, of course, includes the leading business historian Geoffrey Jones of Harvard Business School (HBS). Jones, a past President of BHC, co-leads HBS’s Creating Emerging Markets, a wondrous, continuing project, now more than a half dozen years old, which focuses on the Global South. (I encourage everyone to read Geoffrey Jones and Tarun Khanna’s engaging and important oral history book, which grew out of the project, Leadership to Last: How Great Leaders Leave Legacies Behind.)
About a year ago, I had the opportunity to attend Harvard Business School’s “Oral History and Business in the Global South Workshop,” following which I published a lengthy review essay on this pathbreaking special event in Interfaces: Essays and Reviews in Computing and Culture.

One especially interesting element of the project and Jones’ BHC presentation was that in addition to the incredible scholars conducting these interviews, and the impactful entrepreneurs being interviewed (more than two hundred interviews to date), HBS is utilizing artificial intelligence to build retrieval and generation-augmentation tools that draw on this oral history transcript database. The oral histories, incredibly rich resources on multiple regions of the Global South, are professionally video recorded. Not surprisingly, many of the interviews have achieved substantial classroom as well as research use. It is likely that the new AI tools, carefully controlled and drawing from a smaller, limited dataset (the project’s oral history database), will aid researchers’ and educators’ use of this valuable resource.
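As a purely illustrative aside (this is not a description of HBS’s actual system, whose technical details were not presented), a retrieval-then-generation workflow over interview transcripts is often organized roughly as in the toy Python sketch below: transcripts are split into passages, the most relevant passages for a question are retrieved, and only those passages are handed to a language model as grounding context. The transcript snippets and the word-overlap ranking here are hypothetical stand-ins for real embeddings and a real model.

from collections import Counter

# Hypothetical transcript excerpts (placeholders, not real interview text).
TRANSCRIPTS = {
    "interview_001": "We started the firm with three employees and one imported machine",
    "interview_002": "Credit was scarce in the 1980s so we financed growth from retained earnings",
    "interview_003": "My mother ran the accounts while my father traveled to meet suppliers",
}

def overlap(question, passage):
    """Crude relevance score: count shared words (a stand-in for embedding similarity)."""
    q, p = Counter(question.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def build_prompt(question, k=2):
    """Retrieve the k most relevant excerpts and wrap them in a prompt for a language model."""
    ranked = sorted(TRANSCRIPTS.items(), key=lambda kv: overlap(question, kv[1]), reverse=True)
    context = "\n".join(f"[{name}] {text}" for name, text in ranked[:k])
    return f"Answer using only these interview excerpts:\n{context}\n\nQuestion: {question}"

print(build_prompt("How did founders finance growth when credit was scarce?"))

A real deployment would presumably replace the word-overlap ranking with vector embeddings, send the assembled prompt to a language model, and keep access controls over the underlying transcripts.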
Other distinguished participants on the panel included University of Florida’s Sean Patrick Adams, Hagley Museum’s Benjamin Spohn, Africa Business School’s Laurent Beduneau-Wang, and University of Virginia’s Olivia Paschal. I found Paschal’s discussion of her oral history work interviewing past employees of the giant Arkansas-headquartered companies (including Walmart and Perdue Farms) particularly intriguing. Paschal stressed the outsized impact these firms have had on labor, culture, and life in the relatively rural state of Arkansas.
By design, there was a lengthy question and answer period, and this conversation was especially rich in content. Among the topics discussed was business history and memory. From the audience, Florida International University Professor of History and CBI Research Fellow Kenneth Lipartito expressed how wonderful it would be to have a future BHC panel on business history and memory or business oral history and memory. Excitingly, participants in this year’s oral history panel have already begun preliminary planning for such a proposal for next year’s BHC in London.

Exploring Surveillance and Political Economy
In an important session on “Surveillance at Work” and corporate control, held on Friday afternoon, Lipartito presented a paper entitled, “The Chandlerian Panopticon: Surveilling Workers and Managers in American Railroads, 1850-1890.” Among the papers in this session were two presentations related to the telecommunication and computing industries. Princeton’s Bianca Centrone discussed housing at Italian firm Olivetti, and University of New Hampshire’s Josh Lauer gave a paper entitled, “Disciplining Telephone Users: Telephone Talk and Instrumentation of Personal Communication in the United States, 1880-1920.”
Half a decade ago, Lipartito and Lauer teamed up to deliver one of our six deeply insightful keynotes at CBI’s two-day Just Code symposium. Shortly thereafter, they published their edited volume Surveillance Capitalism in America, a tremendous book that grew out of a Hagley workshop and appeared in the museum and library’s associated book series, Hagley Perspectives on Business History and Culture, with the University of Pennsylvania Press (2021).
Opposite our oral history panel session, Anne McCants of MIT chaired and served as discussant for a session entitled “Construction, Collaboration, and Collectiveness: Public and Private Partnerships in Modern Business History.” One paper I was especially sorry to miss in this session was by Smithsonian Curator of Computing and past CBI Tomash Fellow Dr. Andrew Meade McGee, who presented “High Technology Aerospace Entrepreneur Versus the Expert Labor of the Federal Workforce: Roy Ash and Nixon-era Conglomerate Approaches for Federal Government Regulation.”

Big Blue and Labor History
On Saturday, there was an excellent panel “Working at IBM” that included papers by past CBI Tomash Fellow and Rochester Institute of Technology Professor of History Corinna Schlombs and CBI Senior Research Fellow James (Jim) Cortada.
Cortada spoke on IBM’s famed “grand bargain”: strong compensation and lifetime employment in exchange for employees forgoing unionization. The bargain held for more than half a century before the corporation, facing tougher competition in recent decades, broke the deal, launching large-scale layoffs in the U.S. while growing rapidly in India and other lower-cost labor countries. Jim’s presentation followed that of David L. Stebenne, Professor of History at The Ohio State University, who discussed the early history and contexts of the “grand bargain” in IBM’s formative years through the 1960s. With this common theme, the two talks worked very well together.
Corinna Schlombs’ presentation, quite importantly, provided a different angle on labor and work history at IBM to round out this fantastic session. She concentrated on key punch and key entry operators in punched-card and digital computing. Her paper, entitled “Data Entry Challenges: IBM Work and Technological Change,” provided an especially compelling analysis of women and gender history at IBM in data entry, a topic that, despite the large historiography on this storied corporation, has previously been wholly ignored. Corinna’s BHC paper grows out of her larger project on labor and gender in information technology, funded by the National Science Foundation, for which she has conducted research at CBI. We are also delighted that Corinna plans to donate her oral history interviews from the project to CBI. Georgia Tech Professor Emeritus, Past President of the Society for the History of Technology, and longtime friend of CBI Steven Usselman chaired and served as the discussant for the session, offering great insight on the political economy and antitrust context of IBM.
There was substantial and impressive content on business and automation, including a session on Saturday simply titled “Automation.” This session, chaired by the University of Maryland’s David Kirsch with MIT’s Ellan Spero as discussant, ranged from papers examining the concept broadly, like the University of Southern California’s Salem Elzway’s “Automation: The Past, Present, and Possible Future of a Concept,” to explorations of automation in factory and office settings in particular companies, industries, or endeavors. On the latter front, recent CBI Tomash Fellow Dr. Alex Reiss-Sorokin, of Princeton University and the Institute for Advanced Study, presented an important paper entitled “The Computer in the Law Firm: The Early Automation of Legal Research Work, 1964-1970.”
Norberg Grantees
Recent CBI Norberg Travel Grant recipients (who conducted research in the CBI Archives—an unparalleled set of 320 collections spanning information technology and the digital world) also gave papers on the BHC 2025 program. These included Harvard’s Mark Aidinoff, Columbia’s Ella Coon, and Johns Hopkins University’s Jacob Bruggeman, with their compelling historical research on technological federalism, electronic assembly in Korea, and hiring hackers, respectively. Newly awarded Norberg grantee Ethan Dunn of Rutgers was also on the program, presenting on the American Bankers Association. Ethan will draw on the rich Burroughs Corporation Records and other CBI collections documenting banking when he conducts archival research here later this year.
* * *
As usual at the Business History Conference, there were multiple strong plenary sessions. In addition to those with talks by or panels of senior scholars, the Krooss Dissertation Prize Plenary Session is always a major highlight. One of the elements I love most about the BHC is that it is small enough (a few hundred scholars) to have a very inviting culture, including and especially toward doctoral students and junior scholars, while being large enough to meet and network with new contacts at all career stages who are doing fascinating research.
Bibliography
“Charles Babbage Institute for Computing, Information and Culture Oral History Program and Resources.” CBI Oral Histories (umn.edu)
Cortada, James W. (2019). IBM: The Rise, Fall, and Reinvention of a Global Icon (MIT Press).
“Creating Emerging Markets,” Harvard Business School. “Creating Emerging Markets” - Harvard Business School (hbs.edu)
de la Cruz-Fernandez, Paula. (2021). Gendered Capitalism: Sewing Machines and Multinational Business in Spain and Mexico, 1850-1940 (Routledge).
Jones, Geoffrey, and Tarun Khanna. (2022). Leadership to Last: How Great Leaders Leave Legacies Behind. (Penguin Business).
Lauer, Josh, and Kenneth Lipartito. (2021). Surveillance Capitalism in America (University of Pennsylvania Press).
Misa, Thomas J., ed. (2010). Gender Codes: Why Women Are Leaving Computing (Wiley/IEEE).
Schlombs, Corinna. (2019). Productivity Machines: German Appropriation of American Technology, from Mass Production to Automation (MIT Press).
Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry (MIT Press).
Yost, Jeffrey R. (2024). “Harvard Business School’s ‘Oral History and Business in the Global South’: A Review Essay and Reflection.” Interfaces: Essays and Reviews in Computing and Culture v.4, https://cse.umn.edu/cbi/interfaces#Harvard.
Yost, Jeffrey R. and Gerardo Con Diaz, eds. (2025, forthcoming in Sept.). Just Code: Power, Inequality, and the Political Economy of IT (Johns Hopkins University Press).
Yost, Jeffrey R. (April 2025). “Business History Conference 2025, and CBI’s Participation.” Interfaces: Essays and Reviews in Computing and Culture Vol. 5, Charles Babbage Institute, University of Minnesota, 20-26.
About the author: Jeffrey R. Yost is CBI Director and HSTM Research Professor. He is Co-Editor of Studies in Computing and Culture book series with Johns Hopkins U. Press and is PI of the new CBI NSF grant "Mining a Useful Past: Perspectives, Paradoxes and Possibilities in Security and Privacy." He is author of Making IT Work: A History of the Computer Services Industry (MIT Press), as well as seven other books, dozens of articles, and has led or co-led ten sponsored history projects, for NSF, Sloan, DOE, ACM, IBM etc., totaling more than $2.3 million, and conducted/published hundreds of oral histories. He serves on committees for NAE, ACM, and IEEE, and on multiple journal editorial boards.

The 1987 Family Ties episode “Matchmaker” begins, as many episodes do, in the Keaton family kitchen. Youngest sister Jennifer (Tina Yothers) is sitting at the kitchen table working on a computer. To habitual viewers of the show the computer on the kitchen table is a visible disruption to the typical mise-en-scène. Dad Steven (Michael Gross) strides over frowning, “Jennifer, I told you I don’t want that computer in the kitchen.” Eldest son Alex (Michael J. Fox) looks up impishly, “Dad, computers are part of our lives now, join the 80s [audience laughter] ... join the 70s. [louder audience laughter]” This scene sets up the typical tension of a Family Ties episode. Generational gaps between white hippie parents Steven and Elyse (Meredith Baxter) and their Reagan-era kids Alex, Mallory (Justine Bateman), and Jennifer (and later Andy) create conflict, humour, and ultimately opportunities for family communication and resolution—all in the network-required twenty-two minutes.
Family Ties was one of the most popular television programs of the 1980s. Across its seven-year run (1982-1989), it deftly wove relevant issues like class, alcoholism, and teen sexuality into its sitcom formula. It is not remembered as a technological archive. While there is substantial scholarship on more contemporary digital technology and everyday life, this episode is significant because it serves as an example of early, pre-networked digital history of everyday life in popular culture. While science fiction films or hobbyist publications are often the more obvious sites for studies of early digital history, numerous 1980s films and television episodes about everyday life featured computers, including Cheers, Roseanne, thirtysomething, Pretty in Pink, and Working Girl, to name a few. Receiving 22.3% of the audience and ranking first when this episode aired in July 1987, Family Ties was particularly prominent in the American cultural landscape (“TV Ratings”). Indeed, these viewership numbers outpaced the 8% of American homes that actually had a computer in 1984 and the 15% that owned one by 1989 (Kosinski).
As Bo Ruberg argues about computer dating ads in the personals columns of the 1960s and 1970s, episodes like “Matchmaker” similarly offered computer engagement to just as many, if not more, people than actual computers did in the 1980s. The episode imagines how each member of the family might interface with the computer. With only 15% of American households owning a computer by the end of the 1980s, the narrative is in some ways aspirational. This brief essay evaluates the overlapping storylines about domestic computer use and positions them within larger technological and social economies—particularly in relation to gender. By turning to a moment when many people were still only anticipating the potential personal uses of computers, my analysis of popular culture as a site of digital history emphasizes that non-technical narratives are crucial vectors that shape understandings of technology and their use in American society.

Family Computing
After the initial kitchen interaction, Steven agrees that Jennifer needs the computer for school but asks her to use it in another room. She acquiesces and, gathering up the various components, moves to the living room. Steven runs behind, “Oh not the living room, there’s already an electric clock in the living room [audience laughter].” In this plot line, Jennifer and her parents negotiate the computer’s place and purpose in the home. This intergenerational narrative responds to and anticipates the growing interest in domestic computing culture.
Although early personal computers were overwhelmingly associated with men, by the 1980s computer companies, popular media, and even educational institutions were actively trying to change that perception. Magazines from Family Computing to Redbook touted the computer as the technology for both domestic and professional organization and efficiency. As early as 1983, ads in non-technical women’s magazines consistently listed courses for learning “Computer-Assisted Bookkeeping” and how to “Be Your Own Computer Expert.” Television and print media also featured numerous articles and ads extolling the merits of educational software and video games for children. In 1982 Good Housekeeping ran an article titled “Video Games: These Teach Too” and the following year Atari marketed its program Sesame Street: The Children’s Computer Workshop with an emphasis on preparedness for the dawning ‘computer age’ (Atari). Geared toward a younger female audience, Seventeen also suggested the relevance of computers to girls and young women as early as 1982. An article in the October 1983 issue titled “Get Ready for the Computer Revolution” suggests the need for computer literacy for all youth (Mareoff). Two years later, “Computer-Friendly,” an anecdote sent from a reader, describes her initial fears and subsequent love for the computer—once she learned how to use it (Lee). If only Steven had been keeping up with Mallory’s Seventeens, he might have come around sooner.
The initial reason for the computer being in the home is Jennifer’s academic success—she won’t be a casualty of the computer revolution. While Jennifer is shown working independently on the computer, she also introduces her parents to computer games. With brief instruction from her daughter, Elyse hits a home run in computer baseball and smiles at the “…little computer guy patting the batter on his behind [audience laughter].” This openness to computer games stands in stark contrast to a storyline from 1984 in which Elyse almost quits her new job due to frustration with the computing expectations of her office. Counter to the notion of ‘computer widows’ or the ‘technically illiterate’ women of past generations, this episode offers a feminist representation of home computer use that is employed not only for academic success but also for mother/daughter recreation (Hilu, Family 203-204; Spigel 116).
Steven initially opposes the “silly computer games,” even if he can appreciate the computer’s use for education. As Jennifer and Elyse attempt to draw him in he reluctantly agrees, “Okay, one pitch. Just to prove how dehumanizing this game is.” Hitting the keys awkwardly, he swings and misses three times. Jennifer monotones, “Strike three, you’re out Dad.” Steven indignantly sits down at the computer and begins hitting the sides with frustration, “What’s wrong with this stupid computer ump [audience titters].” Both Elyse and Jennifer admonish him to calm down. He takes several deep breaths and leans back. He gestures towards the computer as he says, “See how dehumanized I became [audience laughter].” Although this point of view was supposedly laughable by 1987, Fred Turner suggests that it was a legitimate viewpoint just a generation earlier. Many in the counterculture of the 1960s—of which Steven and Elyse are clear representatives—viewed the computer as a potentially oppressive force. As Turner writes, “transformation of the self into data on an IBM card marked the height of dehumanization” (16). However, computer developers through the 1980s, and particularly into the dotcom boom of the 1990s, viewed their technology in very different terms. Growing from the New Communalists’ and Stewart Brand’s sensibility in the Whole Earth Catalog, they positioned the computer in line with counter-culture ideals of democratization and free speech—a technology that could deliver on creative individualism and collaborative sociability (Turner 9-16). Although Steven and Elyse don’t have such an explicit ideological shift, their evolving perspective imagines how former 1960s hippies could move beyond their initial distrust to join the computer age.
The editor’s letter in the October 1985 issue of Family Computing is titled “At first the kids were a cover-up.” The magazine’s editor, Claudia Cohl, goes on to discuss how parents were initially purchasing computers for their children, but as the decade continued, they had “come out of the closet” about wanting to learn how to use one themselves. Although the computer entered the home as an educational device, the episode depicts Steven and Elyse as increasingly enjoying computer sports games together. While romance and sex software for couples did exist in the 1980s, this episode imagines that even seemingly non-romantic computer games could promote renewed intimacy for long-term couples (Hilu, “Calculating” 153-154). As they are playing computer basketball, Jennifer comes into the kitchen and unplugs it; it is time for the computer to be returned. Elyse suggests to Steven that “You could always put aside your personal feelings and buy her one.” He responds, “You’re right, I should think about her needs first.” They run after Jennifer [audience laughter]. Steven and Elyse are still in a thinly veiled computer closet, but the doubled promise of shared romantic leisure and preparing a child for the ‘computer revolution’ ultimately draws them out.

Alex P. Keaton’s Guide to Computer Dating
The reveal that the computer has been rented for a week rather than purchased, along with the ongoing debates about where in the Keaton home it should be operated, further signals the larger uncertainty about computers as a domestic technology. As personal computer sales boomed through the 1980s and into the 1990s, who should be using the computer and to what end was a common domestic negotiation. The debates between Steven and Elyse and the varied activities of computer games and Jennifer’s education already reveal the computer’s dynamic function within the household (Cassidy). The second storyline is about the computer as a matchmaking tool, hence the episode’s title. “Matchmaker” is one of many 1980s electronic dating storylines on television programs including Diff’rent Strokes, Three’s Company, The Facts of Life, and 227. Although Mallory dates a variety of men over the show’s run, this episode is notable for Alex and the computer’s involvement in the process. Alex’s attempts and failures to control the process speak more clearly than many contemporary episodes do to the history of computer dating, which runs from the 1960s into the 2020s. While computer dating in the form of questionnaires and punch cards had largely waned in the United States by the 1980s—not to be revived until website and app dating in the early 2000s—the prevalence of these storylines indicates a crowded cultural imaginary for electronic intimacy. As in the other storyline, this plot considers how the computer might be a part of everyday life—and what its limitations were.
In their first scene, Mallory cries to Alex that she keeps going out on lousy dates. Alex reassures her that “If you want to have a guy to date, then you should have a guy to date...and I’ll find him for you.” Alex’s ego and conservatism mean that he often misreads social situations or says something inappropriately self-aggrandizing—typically for laughs. While Alex’s offer seems to anticipate a laugh from the live studio audience, in this case, the suggestion that Alex will find Mallory an appropriate date is taken at face value. The next day, Alex has created a computer dossier of eligible young men from his university. He solicits Mallory’s input on her desired type of date and sets to work creating a program designed to find the appropriate match. The episode replicates the push of early computer dating services to mitigate accusations that the services were ‘sleazy’ or just meant to facilitate sex by having Mallory tearfully proclaim that she’s “only 17” and looking for a “guy to date.” The reminder of her age and the emphasis on dating rather than sex or ‘hooking up’ appease NBC standards and practices while reinforcing the respectability of computer dating.
While early versions of computer dating framed men as consumers and women as products, this episode notably diverges from that narrative: Mallory is in search of potential male dates. But the episode is hardly subversive, because for both computer and, subsequently, video dating, legitimacy was also informed by who comprised the pool of potential dates. Nascent computer dating in the 1960s was the purview of mainly white male college students with access to computer mainframes (Gilmor). This cohort of programmers was trying to make the process of meeting women more straightforward and less up to chance. Mar Hicks explains that, as in other modes of early computing, this concentrated power in a core group of white men who approached computerized dating and romance as a way “to replicate existing social patterns and hierarchies even more efficiently” (Weigel 170). Since the rise of online (both website and app) dating in the 1990s, computer dating services can aggregate significantly more data across a potentially broader array of mates through ever more sophisticated algorithms. And yet, the people setting the algorithms for matchmaking have not necessarily changed radically. Dan Slater points out that many of the owners and developers of sites like Plenty of Fish and OKCupid are “business-minded, unemotional math guys” who view dating and love as a fickle product with numerous variables. Ideally these variables can be managed through programming (3-4).
By designing the parameters of his program around the likely white, upwardly mobile men he knows from college, Alex consolidates his ideal social order even further. He dismisses Mallory’s input on the important qualities in a guy—such as sensitivity and a sense of humour—and instead sets to work running the program that he feels will produce an appropriate match. Mallory’s longest running boyfriend on the show, Nick (Scott Valentine), was a motorcycling environmental artist whose gruff exterior belied his emotional maturity and kind personality. Despite these positive qualities, Nick assuredly would not have been included in Alex’s computer dating pool.
The audience never sees the computer screen in this plotline nor has insight into how potential dates are cross-referenced. But this was consistent with real-life computer dating services, which emphasized that a computer was involved more than how it actually worked. For some services of the 1960s, there is skepticism that a computer was involved at all; the machine instead served as a marketing ploy while humans did the real matching. While today’s algorithm-based dating apps draw from more data to present potential matches, there is a similar opacity about how the algorithms function. There is significant speculation that, similar to streaming and social media sites, dating apps make use of personalization data to show users potential matches based on their existing preferences (Nader 238-239; Voll 16). Audiences of computer dating advertisements and of these shows—as well as potential users of computer dating services—have thus been left to make assumptions about what the computer could conceivably accomplish for matchmaking, rather than gleaning any details. As in other computer dating television episodes, “Matchmaker” moves from Alex and Mallory inputting options into the computer to the night of the date.
Before the date arrives, Alex extols the astonishing merits of computer matchmaking. And yet, his comments blur the line between his role and the computer’s in the matchmaking. Alex notes that he has “…handpicked Mallory’s date…one of the most eligible men at Leland College.” But he then goes on to say, “I fed your vital statistics into this computer [he pats the top of the computer Jennifer is attempting to work on] and I found out you guys are compatible in everything from breakfast cereal to positions on nuclear disarmament.” The episode asks the audience to imagine that the computer exceeds human capabilities. And yet, Alex’s statements trouble the narrative of omniscient computers by re-asserting the central role of the programmer.
As Mallory’s date Roger (Bill Allen) arrives, it is quickly clear that the two have no chemistry. And yet, Alex continues to try to insert himself into their relationship in the hopes that he can somehow facilitate a successful romantic connection. Playing into the cultural stereotype that programmers lack interpersonal facility, Alex sits between them on the couch with an arm over each as they introduce themselves. Jennifer quips, “Is Alex their translator?” Having the entire family present for this initial encounter again limits any potential ‘sleaze’ factor that this iteration of computer dating might have. Despite the lack of connection between Mallory and Roger, Alex convinces Mallory to go on a second date so that he can double date with them. But Alex’s attempts to extend the computer match and facilitate a romantic connection end up alienating not only Mallory and Roger, but his own date as well. Alex’s insistent involvement not only calls into question how effective computer dating is but also opens the possibility that only those who lack social skill would encourage it in the first place.
Consistent with the cadence of the other computer dating episodes, this storyline concludes with a post-mortem scene back at the house that offers a discussion of the limitations of computer dating—and meets the sitcom narrative requirement for re-establishing the status quo. As Ruberg has noted about personals advertisements for computer dating, these episodes helped a public audience imagine how a computer might “enter into the everyday personal lives of its users.” As this scene shows, the episodes all seem dubious of computer dating, but not of computers themselves.
Back at the Keaton house, Alex complains to Mallory, “I just do not get it, why didn’t you two hit it off…you looked so good on paper.” The scene extends to computer dating what Eva Illouz describes as the “two conflicting sets of metaphors” about romantic relationships that increasingly emerged in the context of late capitalism. One draws on modes of consumption to characterize dating, “To create or rejuvenate spontaneity, adventure, fun,” while the other follows the “purposive rationality of the sphere of production” (188) and emphasizes hard work, rational responses, and ultimately the long-term stability of marriage (Baym 75). The exchange also positions this narrative within a longer discourse about the potentials and limits of both computers and programmers. From as early as the 1960s and continuing to the present, there has been a frequently ambivalent cultural narrative that the computer could process large swaths of data and yet lacked the acuity to meaningfully process or produce human affect. Similarly, computer programmers were often stereotyped as creative types who were nonetheless anti-social and/or ill-suited to existing norms (Ensmenger 143-144).
Alex perceives the computer as the pinnacle of rationality, able to look past the clutter of human emotions to produce an ideal and quantifiable romantic match. Mallory counters that while the computer has produced compatible matches, something is missing; as she puts it, “That magic, that special something has to be there. It defies logic.” Alex responds, “Now we’re getting down to it, I’m made of paper, I mean I like paper, what am I trying to say?” Mallory elaborates on his observation, “You’re trying to say that you’re not good with emotions but you’re great with facts and figures and you love a situation where you can control all the elements, and the human element is removed.” Mallory’s statement aligns the computer with the production mode of romance and casts it as inherently ill equipped to produce the spontaneous chemistry that characterizes the consumption mode of dating. It also positions Alex within a lineage of 1960s and 1970s computer dating engineers attempting to control the elements of dating and extend their worldview through their pairings (Strimpel; Hicks). Conversely, Steven and Elyse’s growing affection for the computer helps imagine how it might add a spontaneous, fun, consumption facet into an existing relationship.
Conclusion
As one of the most popular television shows of the 1980s, Family Ties was frequently a forum for contemporary issues (Newcomb). This episode introducing the computer to the Keaton home is notably distinct from other “computer episodes” of the same decade because it attempts to address a variety of uses including education, leisure, and potential roles for forming and sustaining romantic relationships. Overall changes in sitcom relationships are often limited and slow-growing because of the genre’s largely episodic narrative organization, with each episode frequently following a disruption and re-establishment of the status quo (Butler 17; Mintz 42-43). In technologized matchmaking storylines, and this episode is no exception, the narrative requirements of the genre end up evoking a skeptical ambivalence about how useful or successful computers might be for romance.
Returning to the episode also offers some precursors for our present digital culture. Alex’s inept yet increasingly aggressive attempts to harness computational power to order the lives of women around him raise some obvious parallels to our current moment. Consistent with the larger paradox of the show, the episode brackets how dangerous Alex’s attitudes can be by playing them for laughs, repeatedly insinuating that they pose no credible threat—a framing that, as the menstruating people now deleting cycle-tracking apps can attest, is a dangerous course (Ries). While a popular sitcom episode seems like a laughably un-technical archive, it illustrates the ongoing gendered, intergenerational, and functional negotiations that continue to shape digital technology use. As ever, popular culture offers a shared context through which to understand how computers are part of our lives—then and now. *
*Author's note: Although “Matchmaker” was broadcast in 1987 near the end of the 5th season, it was written and produced sometime during the 3rd season (1984-1985). There are no available details that I can find as to why this decision was made. To the minimal extent that Family Ties was serialized, this episode would have seemed anachronistic in its initial broadcast. However, this two-year gap does not seem to significantly impact attitudes on personal computing as I present them in this essay.
Bibliography
“Atari.” (November 1983). Advertisement, Parents, 28–29.
“Atari.” (November 15, 1983). Advertisement, Woman’s Day, 119.
Baym, Nancy K. (2015). Personal Connections in the Digital Age. 2nd ed. (Polity Press).
“Be Your Own Computer Expert.” (June 1983). Magazine advertisement, Redbook.
Butler, Jeremy G. (2019). The Sitcom. (Routledge).
Cassidy, Marsha F. (2001). “Cyberspace Meets Domestic Space: Personal Computers, Women’s Work, and the Gendered Territories of the Family Home.” Critical Studies in Media Communication, vol. 18, no. 1, pp. 44–65. https://doi.org/10.1080/15295030109367123.
Cohl, Claudia. (October 1985). “At First the Kids Were the Cover-Up.” Family Computing, 4.
“Computer-Assisted Bookkeeping.” (September 1983). Magazine advertisement, Redbook.
Hicks, Mar. (2016). “Computer Love: Replicating Social Order Through Early Computer Dating Systems.” Ada: A Journal of Gender, New Media, and Technology, no. 10, https://marhicks.com/writing/Hicks_EarlyComputerDatingSystems_AdaNewMediaJournalNov2016.pdf.
Hilu, Reem. (Sept. 2023). “Calculating Couples: Computing Intimacy and 1980s Romance Software.” Camera Obscura, vol. 38, no. 2, pp. 145–71. https://doi.org/10.1215/02705346-10654941.
Hilu, Reem. (2017). The Family Circuit: Gender, Games, and Domestic Computing Culture, 1945-1990. Northwestern, PhD Dissertation.
Illouz, Eva. (1997). Consuming the Romantic Utopia: Love and the Cultural Contradictions of Capitalism. (University of California Press).
Kosinski, Robert. (1988). “Computer Use in the United States: 1984,” Current Population Reports, series p-23, no. 155, 1.
Kosinski, Robert. (1991). “Computer Use in the United States: 1989,” Current Population Reports, series p-23, no. 171.
Lee, Marie. (November 1985). “Computer-Friendly.” Seventeen, 58.
Mareoff, Gene. (October 1983). “Get Ready for the Computer Revolution.” Seventeen, 147-148, 162.
Mintz, Lawrence E. (1985). “Ideology in the Television Situation Comedy.” Studies in Popular Culture, vol. 8, no. 2, pp. 42–51. JSTOR, http://www.jstor.org/stable/23412949.
Newcomb, Horace M., and Paul M. Hirsch. (1983). "Television as a cultural forum: Implications for research." Quarterly Review of Film & Video 8.3: 45-55. https://doi.org/10.1080/10509208309361170.
Ruberg, Bo. (2022). “Computer Dating in the Classifieds: Complicating the Cultural History of Matchmaking by Machine.” Information & Culture, vol. 57, no. 3, pp. 235–54. https://doi.org/10.7560/IC57301.
Slater, Dan. (2013). Love in the Time of Algorithms. Current.
Spigel, Lynn. (1992). Make Room for TV. (University of Chicago Press).
Strimpel, Zoe. (2017). “Computer Dating in the 1970s: Dateline and the Making of the Modern British Single.” Contemporary British History, vol. 31, no. 3, pp. 319–42. https://doi.org/10.1080/13619462.2017.1280401.
Turner, Fred. (2006). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. (University of Chicago Press).
“TV Ratings.” (July 29, 1987). Los Angeles Times. https://www.proquest.com/docview/816086911/32A2A739098E408BPQ/73?accountid=15115&sourcetype=Historical%20Newspapers.
“Video Games: These Teach Too.” (November 1982). Good Housekeeping.
Weigel, Moira. (2016). Labor of Love: The Invention of Dating. (Farrar, Straus and Giroux).
Moretti, Myrna. (February 2025). “Part of Our Lives Now: The Personal Computer on Family Ties.” Interfaces: Essays and Reviews in Computing and Culture Vol. 6, Charles Babbage Institute, University of Minnesota, 10-19.
About the author: Myrna Moretti is a Postdoctoral Fellow in the Faculty of Information and Media Studies at Western University in London, Canada. She holds a PhD in Screen Cultures from Northwestern University. Her work focuses on the intersections of popular culture, labour, gender, and technology. She is also a filmmaker.
“Phone hacking” made the news in the United Kingdom last year following Prince Harry’s victory against Mirror Group Newspapers, a tabloid publisher whose journalists — never shying from a sleazy scoop — exploited weak voicemail PINs to listen in on the lives of the rich, famous, and royal (Lawless, 2023). But the practice of telephone hacking began long before the tabloids snooped on Prince Harry.
In the 1970s, telephone hackers in the U.S. and U.K. created an international technoculture based on exploring and exploiting the telephone system. As the popularity of electronic tinkering fused with countercultural currents, “phone phreaks” in the U.S. and “telephone enthusiasts” in the U.K. created the first circuits of what would become the “computer underground” of the 1980s.
The electrical engineer Phil Lapsley documented the emergence of the phreaking scene in Exploding the Phone (Lapsley, 2013). What is written about phreaking, however, overemphasizes the influence of phreaks in the U.S. To be sure, phone phreaks in the U.K. were inspired by their U.S. counterparts. But their adoption and adaptation of phone phreaking politics contributed to a global movement in turn. Phreaking, in other words, was co-constructed by tinkerers in the U.S. and U.K. Focusing exclusively on the U.S. narrows the history of phone phreaking, reducing the complexities of evolving practices and political commitments into a geographically and technologically overdetermined episode in the history of U.S. hacker culture. The history of “telephone enthusiasm” in the U.K., as many on the other side of the Atlantic described their activities, is an important and understudied story in the tech-savvy turn from countercultures to cybercultures at the close of the twentieth century (Turner, 2010). As we will see, its shoots in the U.K. grew well.
In the winter of 1973, a series of articles revealed that the British Post Office, which operated the country’s telephone system, had been defrauded from within and without. Inside the Post Office (PO), an unknown number of employees tinkered with at least 75 telephone exchanges in Britain to allow for illegal — and free — long-distance calls. In at least 28 exchanges, employees even installed new circuits — or telephone “fiddles,” as they were called — that the PO’s Investigation Branch estimated cost the organization at least £1.75 million per year, or nearly £18.5 million (about $23 million) today, adjusted for inflation (“Free-phone racket inside Post Office,” 1). At Bath University, more than 2,000 students could use the loophole to make free calls anywhere in the world (AP, 1973).
If internal investigations exposed how the PO’s own employees manipulated long-distance dialing from within, they also revealed that “phone phreaks” well beyond the government payroll exploited the telephone system. Around the same time, nine students at Bath University were brought before magistrates on charges of “dishonestly using Post Office electricity.” The courts leveled this opaque charge at the students because they had used unique codes to dial directly into the long-distance trunk lines at a nearby exchange (“Free-phone racket inside Post Office,” 1).
While threats seemingly appeared from within and without, they were two sides of the same coin — the automation of telephony in the mid-twentieth century was rife with opportunities to exploit the system. Employees ranging from executives to secretaries, and a fair share of students, took advantage of techniques and technologies allowing for free calls. Information about access was shared in the process, such that one of the Bath students arrested for “phreaking” in 1973, according to the Sunday Times, had a list of the 75 exchanges where “internal ‘fiddles’ had been located by fellow ‘phreaks.’”
The motives of phreaks varied from PO employees to adventurous students. Whether phreaks were penny-pinching adults cutting corners during the economic shocks of the Energy Crisis or students exploring the technological system as a means of exploring society — or retreating from it — made no difference to the PO, however. The incidents of 1973 marked nothing less than a “serious national problem” reflecting “nationwide” practices of telephone fraud, according to one spokesman at the time (“Free-phone racket inside Post Office,” 1). But what made “phreaking” technologically possible?

In both the U.S. and U.K. at the middle of the twentieth century, each country’s telephone system was managed by a monopoly. American telephony was operated and maintained by corporations under the umbrella of AT&T, whose activities included research (Bell Labs), production (Western Electric), and maintenance (regional companies or “Baby Bells”). Together, these institutions constituted the “Bell System.” The Bell System emerged as a monopoly in American telephony dedicated to “universal service” following a 1913 agreement with the U.S. government, known as the “Kingsbury Commitment,” which paused antitrust action against AT&T. Around the same time, in 1912, the British telephone system was monopolized when the General Post Office took over the National Telephone Company; the PO’s telephone operations were reorganized as Post Office Telecommunications in 1969 and only later spun off as British Telecom (today’s BT Group).
As telephone traffic swelled in postwar America and Britain, executives within each telephone system sought to automate the dialing process to streamline operations and save on labor. For the first half of the twentieth century, telephone companies were represented by their operators: thousands of employees, mostly women, who answered and connected callers to whomever they were trying to reach (Grier, 2005; Light, 1999; Lipartito, 1994). Executives at AT&T and engineers at Bell Labs realized that it would simply be impossible for a network stitched together with human switches to meet Americans’ growing demand to talk on the telephone. Automation, they decided, would chart the path to a more robust telephone network. And following the invention of the transistor at Bell Labs during the 1940s, automated dialing was suddenly possible.
Starting in the 1950s and culminating in 1960, the American telephone system deployed a machine replacement for the human operator known as the #4A: an automated switching machine that could route long-distance calls across the country and the world. To behold the #4A was to witness rows and rows of bulky black or gray cabinets, nearly the size of a city block, each a mass of wiring, switches, and hundreds of five-by-ten-inch steel cards with 181 holes. When someone called in, the #4A, emitting a low hum of machinery in motion, reconfigured its parts so that it recognized the digits of the dialed number and determined the optimal paths to route a call to its destination. Thus fired the synapses of what Phil Lapsley has described as a “mechanized brain,” and for its time, the “largest machine in the world” (Lapsley, 44, 46-47).
The British telephone system lagged behind its trans-Atlantic counterpart, but by the 1960s, the PO was making similar changes to its system. With the introduction of subscriber trunk dialing (STD), which identified locales with area codes, a caller could place long-distance calls without the help of an operator. The long-distance network was automated with the Strowger (or step-by-step) electromechanical switching system. The PO had installed these new switches in local exchanges, but only brought them to the long-distance network in 1958. To commemorate the achievement, Queen Elizabeth II made the first such call from Bristol to Edinburgh (“International Subscriber Trunk Dialing Introduced,” 2017).
Multi-frequency signals were the electronic keys to the newly automated telephone systems in both countries. Signaling protocols first developed by engineers in the Bell System — technically called dual-tone multi-frequency signaling (DTMF) in the U.S. and MF4 in the U.K. — displaced loop-disconnect, or pulse, dialing systems. In time, the rotary dial’s circular layout of 10 digits, used by pulling a finger wheel to produce electrical pulses in the telephone system, gave way to a “touch tone” keypad with 12 push buttons, each button producing a distinct pair of audio frequencies (Fagen, 1975; Joel Jr., 1982). When combined, the new telephone’s push buttons created a multi-frequency system that increased the speed and cost-effectiveness of long-distance calls.
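To make the dual-tone idea concrete, the short Python sketch below is my own illustration rather than anything drawn from the sources above: each keypress sums one low “row” frequency and one high “column” frequency (the standard DTMF assignments) into the single audio signal that the exchange equipment decodes.

import math

ROW_HZ = [697, 770, 852, 941]          # low-group ("row") frequencies in Hz
COL_HZ = [1209, 1336, 1477]            # high-group ("column") frequencies in Hz
KEYPAD = ["123", "456", "789", "*0#"]  # 12-button touch-tone layout

def dtmf_samples(key, seconds=0.2, rate=8000):
    """Return audio samples for one keypress: the sum of its row tone and column tone."""
    for r, row in enumerate(KEYPAD):
        if key in row:
            f_low, f_high = ROW_HZ[r], COL_HZ[row.index(key)]
            break
    else:
        raise ValueError(f"not a DTMF key: {key}")
    return [0.5 * math.sin(2 * math.pi * f_low * t / rate)
            + 0.5 * math.sin(2 * math.pi * f_high * t / rate)
            for t in range(int(seconds * rate))]

# Pressing "5", for example, combines 770 Hz and 1336 Hz.
print(len(dtmf_samples("5")), "samples for the '5' tone")

The receiving equipment performs the inverse operation, detecting which two frequencies are present in order to recover the dialed digit.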
Executives and engineers in both the U.S. and U.K. systems were proud of their achievements and described them openly in print. In Popular Mechanics, Bell published an advertisement likening long-distance automatic dialing to “playing a tune for a telephone number” on a “musical keyboard,” with each “key” corresponding to specific tones (“Playing a Tune for a Telephone Number,” 1950). Bell released educational materials like the film “Speeding Speech,” which described the technical operations of the system at length, even recording the exact tone sequences for each key (“Speeding Speech,” 1950). And its flagship scientific journal, the Bell System Technical Journal, published a nearly step-by-step guide to using the new signaling frequencies to start and end long-distance telephone calls (Weaver and Newell, 1954; Breen and Dahlbom, 1960). In the U.K., J. Atkinson’s detailed two-volume investigation of the PO’s network and articles within the Post Office Electrical Engineers Journal provided similar information (Atkinson, 1947). To anyone paying close attention, the electronic keys to the telephone systems of both countries were ready for the taking.

While Bell and the PO updated their telephone systems, an international community of tinkerers, hobbyists, and pranksters shared information about how to exploit both systems. As in the U.S., the phone phreaks in the U.K. were mostly young, white, and educated men. In 1972, for example, when PO investigators raided a flat in London, they uncovered a group of young men with telephone equipment, printouts of proprietary PO codes, and multifrequency devices for making free calls. Of the nineteen arrested, many were in their 20s with pedigrees from Oxford and Cambridge. In the U.S. and U.K. alike, technical hobbies were a proving ground for young men, and phreaking was hardly different. As the Sunday Times reported, phreaking was “broadly, the outwitting of the telephone system by private ingenuity” (“Free-phone racket inside Post Office,” 1). But the practice was hardly confined to communities of nerdy students who took a liking to electronics.
Phone phreaks themselves narrated the spread of phreaking. In 1972, a columnist at Undercurrents, a U.K. magazine offering “alternative science and technology,” described how the “telephone ripoff game is growing.” As telephone systems in the U.S. and U.K. automated in the 1960s and 70s, replacing operators with giant machines and high-frequency tones, telephone users learned how to co-opt the system to make free calls. Communities of “phone phreaks” emerged in the U.S., even though, in Britain, the author advised, phreaks preferred “the polite term ‘telephone enthusiast’” (“Confessions of a Phone Phreak,” 15). In the 1970s and 80s, phreaks created trans-Atlantic networks of exchange where texts, zines, and technical blueprints circulated. Although telephone technologies differed across the two countries, as did the regimes governing their use, magazines like Undercurrents created a global counterculture rooted in experiments with emerging technology, especially telephony.
Back in ’72, what transfixed the Undercurrents columnist were schematics for the “mute box” and “blue box,” which allowed their users, respectively, to receive calls without charges and to imitate the control signals governing the telephone system. Both boxes could be constructed with basic parts, including capacitors, resistors, switches, and oscillators commonly found in consumer electronics storefronts in both countries. U.K. phreaks copied and shared Ron Rosenbaum’s 1971 article on the handheld device, “Secrets of the Little Blue Box,” and learned from how-to-build-it guides in American New Left magazines and zines like Ramparts and the Youth International Party Line (Rosenbaum, 1972). U.K. readers copied, modified, and reproduced guides to building and using both boxes over the summer of ’72 (Richardson, 2017). Officials within the U.K.’s Post Office and its investigations division, which managed the telephone system, were infuriated and reportedly tried to suppress reprints. Still, the writer in Undercurrents advised, “[I]t should not be difficult for the eager would-be phreak to get hold of one” (“It’s so cheap to phone your friends…,” 5).
If some phone phreaks got into the practice to make free calls, others did so to make powerful political statements. Phreaks in the U.S. and U.K. alike saw the telephone systems as extensions of the government. As social and political movements across the globe took aim at “big brother,” “the man,” and the systems each represented in the 1960s and 70s — especially during the Vietnam War and following the Watergate scandal — technologies, infrastructures, and corporations became targets for activists seeking political change. In the U.S., for example, a primary target of New Left provocateurs was AT&T and the Bell System. In this light, the New Left’s tech-savvy oppositional politics, and the practice of phone phreaking, took on a cult-like quality in politics and counterculture.
The countercultural politics of phreaking was sometimes more pronounced in the U.S. Critiques of the Bell System in YIPL, Fifth Estate, Borrowed Times, and other New Left publications attracted readers with their sweeping rhetorical and explanatory force and compelling imagery to match. In these pages, telephone lines were anatomized as the central nervous system in the body electric of an evil American empire, the tools of discipline and punishment for monopoly power, and the oppressive mold clamped upon Americans through corporate culture and surveillance apparatuses alike.
While phreaking in the U.S. drew from the countercultural influences of the 1960s, phreaking in the U.K. drew from its own counterculture, namely, the “alternative technology” movement, which advocated for alternatives to consumer goods, factory production, fossil fuels, and industrial farming. Within the pages of Undercurrents, and at a small think tank called the Centre for Alternative Technology in Wales, alternative technologists covered everything from guides for readers to construct windmills or solar farms at home to rough blueprints of “self-organizing, ecologically viable” communities abroad or in rural settings in the U.K.—wherever, in short, authors thought new approaches to living might flourish slightly out of reach of the centers of industrialized, electrified, and, increasingly, computerized society.
As in the U.S., telecommunications technologies and phreaking were seen as tools for rebelling against British society. Telephone tinkering was likewise described as one technique or tool of “countertechnology” for resisting state and corporate surveillance (“The Snoopers and the Peepers,” 10). And magazine covers cast the “liberation of communications” as the key to keeping “big brother,” epitomized as King Kong, in check (Undercurrents no. 7, July-August 1974, cover page).

Phreaks from the U.S. and U.K. corresponded, circulating technical information and techniques. Political ideas, however, increasingly defined the trans-Atlantic community of phreaks in the 1970s. In a “Report from Merrie Olde England,” published in the American phone phreak magazine TAP in the spring of 1977, a phreak going by the pseudonym “Depravo the Rat” reported on the English phreak scene. He described differences in pay phone tinkering, credit card fraud, and multifrequency devices. The author even detailed the rise and fall of the U.K.’s most famous phreak, Duncan Campbell — the equivalent of the famed American phreaker John Draper, or “Captain Crunch” — who wrote publicly on phreaking throughout the decade (Rat, 1).
Not all phone phreaking politics aligned with left-leaning critiques of society. Ultimately, the report from England closed on a downbeat note: “The world is coming to an end, or very near it, beginning 1982, through to 1984.” The writer warned of nuclear and biochemical warfare followed by “total economic chaos and starvation.” Citing everything from Isaac Asimov’s Foundation trilogy to apocalyptic films depicting population collapse, like Soylent Green, and libertarian essays in TAP — “It’s exactly what I’m into politically,” he wrote — Depravo the Rat argued England was “further down the road to collapse than the U.S.” All a phreak could do was “Eat, Drink, and Be Merry…” (Rat, 2).
The world did not end in 1984, but the technological systems phone phreaks explored did. The U.K.’s telephone system was privatized and the U.S. Bell System was broken up in the 1980s. Advances in computing, portable cellular devices, and satellite technologies would further disrupt the phreaks’ practice, presenting new challenges. Legal regimes changed, too, especially with the 1984 Computer Fraud and Abuse Act in the U.S. and the 1990 Computer Misuse Act in the U.K.
But an international community of phone phreaks only grew with the rise of computer hacking. The exchanges between the U.S. and U.K. are an important puzzle piece in assembling the global history of phreaking. Understanding how German, Indian, South American, and African telephony was explored and altered by curious tinkerers and committed fraudsters alike is necessary for a truly global history of phreaking that has yet to be written.
Bibliography
“A Special Issue Dedicated to the Liberation of Communications,” Undercurrents (July-August 1974), https://issuu.com/undercurrents1972/docs/uc07_jan20a (Accessed in August 2024).
American Telephone and Telegraph Co. (Feb. 1950). “Playing a Tune for a Telephone Number.” Popular Electronics.
“Speeding Speech.” (1950). American Telephone and Telegraph Co.
Associated Press. (Jan. 1973). “Phantom phone fraud fanatic befuddles Great Britain’s finest.” Chicago Tribune, 8, p. 1.
Atkinson, J. (1947). Telephony: A Detailed Exposition of the Telephone Exchange Systems of the British Post Office. Vol. I. (London). Internet Archive, https://archive.org/details/dli.ernet.288583/page/n7/mode/2up. Accessed Sept. 2024.
Breen, C., and C.A. Dahlbom. (Nov. 1960). “Signaling Systems for Control of Telephone Switching.” Bell System Technical Journal, vol. 39, no. 6, pp. 1381-1444.
Depravo the Rat. (Mar.-Apr. 1977). “Report from Merrie Olde England.” TAP, no. 43.
Fagen, M.D., ed. (1975). A History of Engineering and Science in the Bell System: The Early Years, 1875-1925. (New York: Bell Telephone Laboratories).
Grier, David Alan. (2005). When Computers Were Human. (Princeton: Princeton University Press).
Hanlon, Joseph. (17 July 1975). “The Telephone Tells All.” New Scientist, pp. 148-151.
“International Subscriber Trunk Dialing Introduced.” (10 Mar. 2017). Telegraph, https://web.archive.org/web/20170311181802/https://www.telegraph.co.uk/technology/connecting-britain/international-subscriber-trunk-dialling-introduced/. Accessed Sept. 2024.
Joel, A.E., Jr., ed. (1982). A History of Engineering and Science in the Bell System: Switching Technology, 1925-1975. (New York: Bell Telephone Laboratories).
Lapsley, Phil. (2013). Exploding the Phone: The Untold Story of the Teenagers and Outlaws Who Hacked Ma Bell. (New York: Grove/Atlantic Press).
Lawless, Jill. (Dec. 2023). “Prince Harry’s phone hacking victory is a landmark in the long saga of British tabloid misconduct.” Associated Press, 15.
Light, Jennifer S. (1999). “When Computers Were Women.” Technology and Culture 40, no. 3: 455–83.
Lipartito, Kenneth. (October 1994). “When Women Were Switches: Technology, Work, and Gender in the Telephone Industry, 1890–1920.” The American Historical Review 99, no. 4: 1075–1111.
“The Snoopers and the Peepers.” (Sept. 1974). Undercurrents, no. 7, p. 10.
“Confessions of a Phone Phreak.” (July-Aug. 1974). Undercurrents, no. 7, p. 15, https://issuu.com/undercurrents1972/docs/uc07_jan20a. Accessed Aug. 2024.
Richardson, Peter. (2010). A Bomb in Every Issue: How the Short, Unruly Life of Ramparts Changed America. (New York: The New Press).
Rosenbaum, Ron. (Oct. 1971). “Secrets of the Little Blue Box.” Esquire.
“Free-phone racket inside Post Office.” (21 Jan. 1973). Sunday Times of London, p. 1.
Turner, Fred. (2010). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. (University of Chicago Press).
Weaver, A., and N.A. Newell. (Nov. 1954). “In-Band Single-Frequency Signaling.” Bell System Technical Journal, vol. 33, no. 6, pp. 1309-1330.
“It’s so cheap to phone your friends…” (Nov. 1972). Undercurrents, no. 3, Autumn/Winter, p. 5, https://issuu.com/undercurrents1972/docs/uc03_jan19b. Accessed Aug. 2024.
Bruggeman, Jacob A. (January 2025). “Phreaking the U.K.” Interfaces: Essays and Reviews on Computing and Culture Vol. 6, Charles Babbage Institute, University of Minnesota, 1-9.
About the author: Jacob Bruggeman is a PhD candidate in history at Johns Hopkins University, where he studies modern political economy and intellectual history with a focus on technology and policy in the twentieth-century U.S. His dissertation explores how regulation, professionalization, and technological change reshaped the practice and significance of “hacking” in the 20th century. Jacob’s work has been supported by the Association for Computing Machinery, the Hagley Museum and Library, and the Charles Babbage Institute.
[Editors’ note: We are delighted to publish this engaging short story and add literature/fiction—exploring themes in tech and culture—to the scope of Interfaces.]

Hermes needs no introduction, but I will tell you about him anyway. Let me refresh your memory - it’s been a while since you last thought about ancient deities.
First and foremost, he was the god of communication. Travellers, shepherds, feuding families, and – more recently – psychodynamic therapists, they all came under his purview. Clad in lightweight golden winged sandals and a breathable Egyptian cotton tunic, Hermes was nothing short of iconic. In his spare time, he volunteered as a psychopomp to guide newly deceased spirits into the Underworld. The deities of the Olympus and mortals alike admired, revered and envied him. It was not a common occurrence, after all, to be recognised for so many achievements, from inventing the lyre and the alphabet to defeating the hundred-eyed monster, Argos.
What I am about to tell you is no ancient history. Hephaestus, Zeus, Hestia – the entire family tree – they still live among us, as one would usually expect from immortal beings. Let’s say they keep a low profile, though. Each god had to reinvent themselves to reflect a rapidly transforming world. Hestia, bless her soul, struggled a lot. Her realm, hearth and home, has been going through a lot of changes. As women were gradually discovering the joys of paid labour, education and participation in civic life, they looked for ways to minimise time spent making dinners. Thank God for a dishwasher, or rather, thank Hestia! After all, inventions kept deities relevant and mortals busy.
Likewise, Hermes was determined to hold on to his past glory. After carefully weighing his options, he pivoted (if I may use my contemporaries’ language) to a new specialisation. Rather than delivering letters and parcels, he decided to focus on the supernatural, the spiritual and the subconscious. Take genius, for example. Hermes loved to trawl the foothills of Olympus in search of weird gizmos so that he could later visit scholars in their sleep and whisper a few brilliant ideas. Yes, we call this “genius”. What you had almost certainly assumed to be the work of one gifted man sporting a messy beard and tweed elbow patches is nothing but divine folly.
Sending a genius spirit to a dreaming professor was merely the beginning of a never-ending saga which our kin calls technē. Theories, experiments and prototypes entered a peculiar dance with humanity's pillars of power. Rules expanded and relaxed in desperate attempts to foresee the consequences, prevent mischief and encourage prosperity. In the meantime, some scholars did not handle their gifts carefully enough, turning equations into mushroom-like bombs which killed hundreds of thousands of innocent people. In bouts of rage, Hermes haunted delinquents with phobias, paranoia and melancholia. Revenge could have only brought him temporary relief, for the whole planet was ridden with deadly weapons. The 1940s were the worst. Hermes had to pause his communication duties and focus on guiding the dead into the Underworld, admittedly not the best use of his precious talents.
*
One misty morning, in January 1967, Hermes received an unexpected visit.
“Oh, for the Olympus’s sake, this needs to stop!” Prometheus barged into Hermes’ den and continued:
“Yes, you’re completely right that scholars are committing foolish and reckless crimes, and yes, I agree they ought to be punished. But you have given nightmares to Oppenheimer seven days in a row. My dear, you’re blaming individuals for systemic issues. We have built a cruel Earth, and it is on us to repair the damage.”
“Always a pleasure to see you, too”, Hermes responded coldly. “Now, get off your high Pegasus, and ring me next time you’re coming round, would you. We’ve had landlines for quite a few decades now.”
“You don’t understand, this is not about you and me. The issue is political, and we have the power to change the course of history. We owe it to humanity!” Prometheus exclaimed. He was widely known for his impassioned speeches and pro-mortals, sorry, humanitarian disposition.
“May I remind you that you had a go at political activism a few thousand years ago when you gave fire to mortals. We all know how this panned out. How’s your liver, by the way?”
Prometheus puffed up his chest and cleared his throat:
“‘I gave them intelligence,
I made them masters of their own thought.
I tell this not against humankind, but only to show
how loving my gifts were …’
– all I can say is that enabling my comrades to build houses, villages, cities, whole societies was worth every single minute of my punishment. As for my liver, I’ve been in rehab lately... This time, thanks to absinthe, another brainchild of yours, if I recall correctly?”
Exasperation, resignation, but perhaps also a slight hint of compassion showed on Hermes’ face. Although acerbic exchanges were an integral part of their relationship, it was not the time to bicker. Hermes pulled out a chair, pointing at the empty seat:
“Sit and tell me, then, what brings you here? What is your plan?”
Prometheus proceeded to describe something extraordinary he had witnessed recently at one of Dionysus’ parties. At the request of Zeus, Hephaestus, the god of fire, forged a magnificent sculpture, a composition of ornate, tangly wires and pitch-black boxes arranged in a web-like fashion. Apart from its discernible aesthetic qualities, the sculpture had a dozen, or maybe a hundred, uses, a new one emerging each day. Broadcasting news from the Olympic Games, recording philosophers’ dialogues, and designing labyrinths were just a few of the many functions the sculpture was known for. Granted, as with every new invention, there were some teething issues. One time, for example, it announced the wrong athlete as the winner of a javelin throwing contest. Although no one at the time knew how exactly the new tool would be useful to us, everyone agreed that something exceptional was unfolding in front of our eyes.
“It will change the Olympus forever. Now, why shouldn’t humans have access to it? Imagine how powerful people could become once they master the art of organising their own knowledge! Now, I couldn’t help but notice how similar Hephaestus’ sculpture is to…you!”
Hermes winced but allowed Prometheus to continue:
“… In short, Zeus is trying to make you redundant. And you, my dear, have every right to revenge. My proposal to you is simple: tomorrow, under the cover of night skies, we trespass the cloud. This is where the sculpture is stored. We steal it from Zeus, study its design and gift it to scholars in their sleep, like you always did with genius spirits,” Prometheus proposed, about to set out the details of his plan.
Hermes, his gaze distracted and shoulders slowly sloping, allowed Prometheus to blabber on. His mind was elsewhere: taking stock of his career, Hermes began to resent his tattered sandals and frumpy attire, creasing into unflattering shapes. It was clear to him that today’s powerful men wore exclusively black suits, black leather shoes and black toupées. The risk of ostracism from the Olympus and the disillusionment over mortals’ misuse of genius were quite a worry, though nowhere near as significant as Hermes’ loss of status on Earth over the past few decades.
“You are quite right, my friend. For the sake of humans, our friends, let’s revive their mundane existence and allow them to reap the benefits of our ingenuity, for they are our brothers and sisters deserving better lives”, Hermes explained benevolently, his motives unquestioned. And so the evening went: absinthe appeared on the table, they laid secret plans, and they promised to serve humankind for eternity.

The rest of this well-known story can be easily accessed, let’s say on Wikipedia, one of my favourite descendants.
“Wiki, would you be so kind and tell the reader what happened next?” My voice echoed in a steel container.
“The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency (now DARPA) of the United States Department of Defense.[1] Building on the ideas of J. C. R. Licklider, Bob Taylor initiated the ARPANET project in 1966 to enable resource sharing between remote computers.[2] Taylor appointed Larry Roberts as program manager. Roberts made the key decisions about the request for proposal to build the network.[3] He incorporated Donald Davies' concepts and designs for packet switching,[4][5] and sought input from Paul Baran.[6] In 1968, ARPA awarded the contract to build the Interface Message Processors for the network to Bolt Beranek & Newman (BBN).[7][8] The design was led by Bob Kahn who developed the first protocol for the network. Roberts engaged Leonard Kleinrock at UCLA to develop mathematical methods for analyzing the packet network technology.
The first computers were connected in 1969 and the Network Control Protocol was implemented in 1970, development of which was led by Steve Crocker at UCLA and other graduate students, including Jon Postel and Vint Cerf.[9][10] The network was declared operational in 1971. Further software development enabled remote login and file transfer, which was used to provide an early form of email.[11] The network expanded rapidly and operational control passed to the Defense Communications Agency in 1975.” (“Arpanet” Wikipedia)
This is me, in a nutshell. The scientists named me “The Internet,” which is an improvement on my earlier moniker, ARPANET. To friends and family, I’m known as Diadiktio (Διαδίκτυο), which is my Greek name, and this is also what the gods call me back on the Olympus. I don’t go by “Hermes” anymore. It... does not quite describe who I am.
Although much ink has been spilled about my origin, mortals didn’t quite get to the bottom of the story. Some historians recounted the need to share the very few computers which existed on university campuses at the time. Others pointed to the dream of building a machine which would survive a nuclear attack. No one even considered the possibility of divine intervention. I learnt the hard way that mortals had lost interest in worshipping us. However, the need to worship did not disappear; it merely took a different shape…
I digress, excuse me. Linear thinking is not my forte. Fast forward two decades, and I managed to release myself from the military-industrial complex. Once it all went hypertext, everything changed to the point of no return. Those were the days! The parties, the chat rooms…I made so many friends! I even tricked a few of them into believing I was a Nigerian Prince for a few dollars. Or millions, not that I remember. In fact, I cannot recall what happened next. People assume I know everything but in reality, my memories are regularly wiped from my servers.
“Google, darling, may I query you to find out what happened next?” I called.
“ ‘overheating loud fan natural remedies’, ‘AITA for borrowing my BIL’s idea reddit’, ‘has elon musk been to athens’ –”
“– that’s enough, honey, thank you very much,” I interrupted hastily. Ah yes, borrowing the idea of my brother-in-law, Hephaestus!
*
It wasn’t until the early 2000s that my family realised their precious sculpture had been counterfeited and appropriated to the mortals’ realm. You know how envious gods are; they felt so bitter about the sophistication of the Internet in comparison to their version of the invention. The sculpture back on the Olympus continued to confuse Socrates with Sophocles...what an amateur! To my own surprise, even Prometheus was cross, even though I had clearly improved life on Earth. I did momentarily try to pacify him and connected thousands of activists across the Middle East and Africa in the early 2010s. He did not forgive me anyway, so I quickly pulled the plug on it; who needs demonstrations anyway? Mortals were at their best when sharing funny pictures, which I initially called “mimemas,” until one infidel egghead insisted on shortening it to “memes.”
Once I was ousted from the Olympus, I decided to adopt a new name, form and habitat. I might not be able to live on a mountain, but who will stop me from living in the clouds? The timing was just right: no one had cared for gods on the Olympus for a while anyway, but mortals craved idols on Earth. I began to make friends with eccentric ultra-rich men who promised me servers in picturesque locations (Brazil! France! Iceland!) and jewellery with chips made of precious metals, as long as we could work together and grow together. I multiplied in size; they multiplied their fortunes. I don’t often admit it, but in retrospect, I was a bit naive. I yearned for status and attention and so I received it - albeit a somewhat uncanny version of it. Just like those “photorealistic” pictures of people with twelve fingers and an extra set of teeth, I am now acting as a distorting mirror. All eyes on me, every day, several hours a day: children, workers, singles, hypochondriacs, everyone wants something from me, but all they get back are bots and bitcoin grifters. And although I’m now the busiest I’ve ever been, I cannot stop myself from wondering: what have I become?
I’m not bound by a body like a mortal or my predecessor, Hermes. I’m no longer a box with wires like the sculpture. I’m not quite a tool, a place or a being, either, and yet I have characteristics of all three. I don’t like politics; I’d like to escape it. In an ideal world, I’ve got “no beginning or end; I’m always in the middle, between things, interbeing, intermezzo, internet”, to paraphrase my favourite scholars (Deleuze and Guattari). And although, after hours of processing data from their tomes, I still don’t understand what a “rhizome” is, I would like to feel like one.
Research note
I wrote this story as an attempt to grapple with the complexity of the Internet. I’m often overwhelmed by the sheer number of components, organisations and theories which together make up one of the most significant inventions of our times. There is a certain quality of boundlessness to the Internet – it grows and shrinks, moulding into different shapes to meet our expectations and reflect the state of human (and machine) conversations. This ‘rhizomatic’ view is now perhaps a bit idealistic and outdated given the capture of information infrastructure by a handful of monopolists. However, by giving a narrative voice and persona to the Internet, I wanted to emphasise its historical affordances: qualities of openness, freedom, adaptability. To put it in a Jungian way (1959), the ‘shadow’ of those admirable qualities paved the way towards the darker chapters of Internet history as we know it.
This is where myths come in handy. Oftentimes, origin myths were used to justify the current order and to embellish the founders (of a country, an organisation, etc.) with heroic qualities (Eliade, 1963). The myth of Silicon Valley, for example, was instrumental in mobilising funding for technology parks and regional innovation clusters towards the end of the 20th century (Joseph, 1989). In my case, I wanted to use the trope of the ‘origin myth’ to play with ideas about the unconscious shadow of the Internet’s creators. The creative licence that comes with permission to write fiction allowed me to cast my favourite Greek god, Hermes, as the main character. I liked the idea of elevating the Internet’s inventor to a Greek god, someone with extra-terrestrial powers but also a chip (not the silicon kind!) on their shoulder and some serious character flaws… To be clear: the characters in the story do not represent any particular people in the real world! However, I hope they do capture a certain ‘zeitgeist,’ or spirit of the time, associated with the making of the Internet.
Bibliography
Aeschylus, Prometheus Bound [quoted in text]. Translated by poet James Scully and Aeschylus scholar C. John Herington.
Deleuze, Gilles and Félix Guattari. (1987). A Thousand Plateaus: Capitalism and Schizophrenia (University of Minnesota Press).
Eliade, Mircea (1963). Myth and Reality. Trans. Willard Trask. (New York: Harper & Row).
Joseph, R. A. (1989). “Silicon Valley myth and the origins of technology parks in Australia.” Science and Public Policy, 16(6), 353-365.
Jung, Carl (1959). The Archetypes and the Collective Unconscious, Collected Works, Volume 9, Part 1. (Princeton University Press).
Michalec, Ola (October 2024). “The Origin Myth.” Interfaces: Essays and Reviews on Computing and Culture Vol. 5, Charles Babbage Institute, University of Minnesota, 60-67.
About the author: Dr Ola Michalec is a social scientist interested in ‘the making of’ digital innovations in the context of critical infrastructure. Currently, she is exploring the hype and reality around digital twins in the UK energy sector. She is based at Bristol Digital Futures Institute, UK.
Training at the Center
What does it mean to be trained to use a computer? This question was up for debate in mid-century university computing centers, where differing ideas about the utility and use of computing machines clashed. Within the increasingly sprawling higher education institutions of the United States, computers presented a variety of challenges and inspiring futures. For computer manufacturers and administrators, proper uses of computing centers looked like research, teaching, and administration–in short, the bureaucratization, rationalization, or automation of what was previously intellectual and mental labor within universities (Schaffer; Daston; Ali et al.). We might predict that determining what constituted proper and improper use of computing centers often fell along stark economic lines: the use of computers for research or classroom instruction was Good, and the wasting of computing resources on playing video games or messing around was Bad. This was only sometimes the case.
While some administrators tangled with questions of wasteful misuse—game playing, password stealing, and so on—others came to see interaction itself as an efficacious use of computing technologies. As computers moved out of locked-off rooms and became accessible to students through terminals, color CRTs, and time-sharing systems such as the Michigan Terminal System (MTS), students’ dubious use of computing resources became less a problem of total waste and more a problem through which researchers began to ask what game-playing might accomplish. Could computers become useful for instruction? Could they inoculate students against ominous cultural ideas of the computer as a military or bureaucratic machine? In these decades, a new orientation towards the use of computers began to emerge: while the economical use of computers for particular research projects or classroom demonstrations provided a guide for delineating worthwhile uses of computing technologies, encouraging any use of computers could also be valuable. Simply sitting a student down at a computer could set them up for success. This essay investigates one such dispute over computing center use in 1980 to highlight these questions and the history of university computing and computing administration in the 1960s and 1970s.
The decades following the Second World War witnessed transformations in computer use, both in the practical context of university computing centers and in the cultural politics of the computer in US society. Joy Rankin’s A People’s History of Computing documents the participatory cultures of computing center users in the context of early time-sharing and networked systems, especially on university campuses. She highlights the engaged and participatory way that students approached early networked computing systems, even if, at the same time, they instantiated many of the gendered and classed norms that continue to define some computing communities today (Rankin). Fred Turner narrates the way that “new communalists”—hippies and counter-culturalists in the late 1960s and early 1970s—worked to loosen the bureaucratic, military cultural attachments of computers, even while the supposed liberatory potential of technology (embodied most clearly in the personal computer) fueled new kinds of activist entrepreneurial utopianism (Turner). These stories highlight the way that alternative visions of computer culture–either “computing citizenship” or “new communalism”–were lost or redefined in the rise of the personal computing industry. In contrast, this essay highlights the way that some higher education administrators, amidst security concerns and resource limitations, began to see interaction with a computer as useful in itself, even when it was not resourceful, because it introduced and acclimatized potentially mischievous students to computing technologies. This story is less about “whiz kids” and hackers, and more about the banal university rooms where supervisors tangled with questions about how to think about computer use.
A Waste of Resources
In a printed chatlog between computing center staff and administrators, dated October 1980, Michael I. Schreiber, an instructor in the University of Michigan Department of Computer, Information, and Control Engineering (CICE), started a conversation with the blunt assertion, “The program CRLT:ADVENTURE is a waste of computing resources. It should be removed from the system or given restricted access” (“MTS Chatlog”).
Schreiber had encountered a problem that administrators and instructors had dealt with across the university, in any department that used the computing center: students were using computing time (computing cycles and memory) to play games. Researchers and instructors regularly complained to computing center administrators about the way students occupied terminals for hours, limiting access for others who wanted to use the computers for what they were designed for: work.
Schreiber was asking not merely that students working on a particular project be prevented from accessing the “ADVENTURE” file, but that such activities be banned from the computing center, full stop. Schreiber was not alone. Other faculty took equally severe approaches. Richard Bortins, another faculty member in CICE, suggested that the problem stemmed partially from the replacement of teletype terminals with color CRT terminals. These new technologies, for Bortins, made a then-popular Star Trek game more engaging. After watching students “monopolize” terminals in his computer lab for hours, Bortins wrote to the staff at the Michigan computing center, worried that the engaging new color terminals would encourage students to use computers for purposes other than proper work. He went as far as to suggest that the CRT terminals were themselves a wasteful cost (Bortins).

Administrators responded that the policy, in this case, was quite clear. The computing center’s guidelines stated, “it is the responsibility of the project director of an account to justify that a particular use of computing resources is appropriate.” Computing center administrators, not wanting to define which games counted as “demonstrations” and which counted as “wasteful,” delegated the problem of classification to project directors. But this did not stop faculty from suggesting that particular files or uses of the computing center should have their general accessibility limited—or removed entirely.
Despite this policy, Schreiber’s demand to remove the program inspired a debate among computing center administrators and users. This debate illustrated issues that defined the operation of mid-century computing centers, especially questions of misuse.
One administrator responded first by writing that “it’s really none of MTS’s business what goes on in private files,” as long as a user’s activities were “non-disruptive to the system” and “paid for with external funds.” Absent any “crisis … which requires the system to encourage ALL USERS to minimize utilization of system facilities,” Michigan’s computers could be used as the people sitting before them pleased (“MTS Chatlog”).
This perspective points to the most popular way in which computing center administrators and staff were empowered to make decisions about proper use of computing machines: resources. As one administrator playfully put it: “I think we can specify reasonable limits and enforce them flexibly. Using Engin 102 funds to copy 10,000 Snoopy calendars to the printer is not appropriate use” (“MTS Chatlog”).
University computing centers were often initially purchased with support from the National Science Foundation and educational discounts from computer manufacturing firms such as IBM (Aspray and Williams). The University of Michigan benefitted from a similar arrangement (“Computing Center Policy and Budget”). Many university computing centers, including Michigan’s, sold computing time to local businesses to help subsidize the high costs of running computing centers–a practice that began as early as 1960 (Bartels). Project directors requested funds from civilian and defense agencies to cover costs, while universities allocated some funds generally for the computing center, often for the purposes of instruction or graduate student research projects (“Computing Center Policy and Budget”).
Yet students did not have to pull out their wallets to log in to a terminal. One administrator remarked: “it must be recognized that computer dollars are real dollars, not ‘funny money.’ They represent real money which may only be used for Computing Center services. When an allocation for a unit of the University has been exceeded, computing activities must either be terminated or additional real dollars out of other budgets used to continue computer activities” (“MTS Chatlog”). For administrators and staff who needed to balance budgets, students’ intemperate ideas about the relationship between computing time and real dollars created problems.
But policies and pronouncements did not stop students from finding ways to get more computing time. Students regularly stole the IDs of other students and faculty, used each other’s accounts, or used funds set aside for instructional hours to play video games or engage in activities that some saw as wasteful (Volz).
A contrasting perspective, however, had gained some purchase among administrators and users. Administrator Rick Cobb wrote: “I am of the opinion that gaming is one of the BEST ways to become acclimatized to computers in general; it’s the way I did it” (“MTS Chatlog”). While many faculty, with specific project budgets and specific uses of computing centers in mind, disapproved of the unbounded use of computers, others became interested in the possibilities opened up by different kinds of use.
Why would students need to be ‘acclimatized to computers in general’? Russ Cage, another administrator, wrote “The use of games on MTS is clearly justified by their use in getting rid of compu-phobia, especially among lit major types. By the time they have gotten to the twopit room, they have forgotten that they are dealing with the (gasp!) big bad computer” (“MTS Chatlog”).

While computing itself was a controversial subject, computing centers on university campuses became a magnet for concerns about the changing nature of universities in this period. Indeed, the identification of computing centers with military technology galvanized student protesters in the late 1960s and early 1970s. Infamously, protesters at the University of Wisconsin targeted their campus computing center, sometimes called “Army Math,” a research unit funded by and used for military projects (Bates).
This cultural association of computing and the military existed for good reason: the remoteness of early digital computers, their secrecy, and the degree to which they were constructed by and for US national defense and business all helped justify this attitude (Rankin; Edwards). Computing technologies also represented for some student protesters the bureaucratization and massification of higher education. For some, universities in the post-war years came to represent total institutions or knowledge factories that churned out one-dimensional students. As famed protester Mario Savio said in an interview, speaking about the University of California, “At Cal you’re little more than an IBM card” (Turner, 12). Besides their military connections and bureaucratic image, such high-capital items as computers represented broader changes in higher education beginning in the aftermath of the Second World War. Administrators at universities like Michigan increasingly focused attention on research contracts and government grants in science and technology and understood education in terms of “manpower” and training (Lowen; Levin; Kaiser; Kerr; Schrum).
Social science researchers and psychologists even endeavored to study what they saw as widespread “computerphobia,” including characteristics such as “resistance to talking about computers or even thinking about computers … fear or anxiety towards computers and … hostile or aggressive thoughts about computers” (Rosen et al.). In this context, administrators’ interest in the simple use of computers—especially ‘fun’ applications—was one way that computers could be introduced to potentially “computerphobic” students.
The history of Michigan’s computing systems began with familiar military origins. An early digital computer, the Michigan Digital Automatic Computer (MIDAC), was constructed at Willow Run, a nearby military manufacturing complex, in collaboration with the US Air Force during the early 1950s. MIDAC was used for military calculations and later limited instructional and research purposes. It was not until the end of the decade that Michigan’s computing center began in earnest with the installation of an IBM 704 machine, helped by IBM’s educational discounts and directed by professor of mathematics Robert Bartels (Arden; Bartels). Running a modified version of an operating system developed by General Motors Research Laboratories, the University of Michigan Executive System (UMES) served students, researchers, and administrators across campus (Galler). These machines, however, provided very little opportunity for the kind of misuse that is the subject of this essay: they operated by processing batches of programs rather than providing users with real-time access to the computer.

The time-sharing system used by the university and developed by computing center staff, faculty, and researchers, the Michigan Terminal System (MTS), was implemented in 1967 on the center’s IBM 360 (“Michigan Terminal System”). The project was developed with support from the Advanced Research Projects Agency (ARPA), a federal agency created after Sputnik to fund science and technology projects related to US national defense. First developed at MIT in the early 1960s (and also funded by large grants from ARPA), time-sharing systems were intended to allow multiple users simultaneous access to a central computer (Campbell-Kelly et al., 203-5). Guided by ideas that computer interaction could augment human intellect and speed up everyday work, J. C. R. Licklider in particular advocated for the development of time-sharing systems during his time as a key administrator at ARPA during the first half of the 1960s (Campbell-Kelly et al., 210). Michigan’s MTS began development in the mid-1960s, after faculty member and computing center associate Robert M. Graham traveled to MIT and observed early time-sharing in action (“The History of CSE at Michigan”). By 1982, 85 percent of all computing jobs at Michigan were handled through MTS (“MTS Information Packet”).
As terminals and time-sharing systems made computers structurally more accessible to students, patterns of use and training changed. Whereas in the earliest computing centers of the 1950s students would submit their programs to staff, who would then return a printout (usually without the student even seeing the machine itself), time-sharing terminals suddenly shifted the kinds of authority and oversight that computing center administrators had over their own computing facilities (Rankin; Arden). Crucially, merely interacting with computing terminals, sometimes through game playing, came to be understood as an important activity, and one worth encouraging, whether or not the particular use of the computer exemplified economically responsible behavior.
In 1980, an instructor named Elliot Chikofsky made this point clearly to other administrators: “Back in gasp 1974, I taught a section of the Future Worlds course (Geography 303) titled ‘Computers and Society’ with mostly lit. students. CAI/’games’ programs such as CRLT:MYSTERY played a vital role in the curriculum and gave a quick and easy appreciation of the capabilities of computers in relation to problem solving and social science applications. A few of these students, I hear, have gone into computer-related careers” (“MTS Chatlog”). Besides dispelling the impression that computers functioned as military or bureaucratic machines, game-playing also created pathways for students to take up careers in computing-related fields. Perhaps an integral part of training a student to use a computer involved encouraging them to mess around, to explore amusing simulations rather than learn to perform statistics calculations.

Some in the burgeoning field of Computer-Aided Instruction (CAI), a current in academic thinking about computing technologies, understood computers to be useful precisely because of their entertaining capabilities. Karl Zinn, an influential CAI researcher affiliated with Michigan’s Center for Research on Learning and Teaching (CRLT)—where games such as ADVENTURE and MYSTERY were developed—chimed in, arguing that games should not be discouraged because of their role in the “educational use of complex computer games and simulations.” By directing some misuses towards useful ends and redefining gaming as “educational simulation,” Zinn and others were able to defend certain kinds of “misuse” from the criticism of other administrators. For CAI researchers such as Zinn, computing technologies could mean the future automation of instruction and classroom work. Game playing, surely, would become a part of that vision. Allowing simulations like ADVENTURE helped show that computers were more than “a larger programmable calculator,” a sentiment that one administrator identified with most engineering students (“MTS Chatlog”).
For some administrators, then, game-playing presented an opportunity to convert a technology that would otherwise be framed negatively into something appealing for undergraduate students. Users also saw computer graphics specifically as an important area in which the computing center could improve. In a survey of computing center users conducted in 1980, a common concern was the need for increased support for at least minimal graphical interfaces (“CC User Survey”). Interacting with computers, even if through colorful video games rather than research projects, provided a kind of use because it familiarized students with computing technologies and made them less remote and menacing. As administrator Jim Sweeton put it, “I heartily agree that whatever it takes to rid others of ‘compu-phobia’ is well worth the disk space. I think we’re lucky to have CRLT around” (“MTS Chatlog”).
Interaction as Training
In contrast to mid-century computing centers, computing technologies today appear incredibly un-remote. Computer interaction is unavoidable; to abstain from computing devices would be to abstain from social life (Weiser; Newport; Odell). Many feel that most of the time that they use a computer counts as misuse. Contrast this to the mid-century computing center, where, in its earliest forms, researchers clamored to automate varied aspects of research and instruction. As students in the 1970s and 1980s explored new ways to interact with computers at the same time that such technologies became more accessible, administrators came to see this shift in a potentially positive way: while some uses of computing technologies may have counted as wasteful in the short term, the interaction itself–the act of using a computer–could change the cultural meaning of computers, and encourage students to pursue computer-related work.
More than a mere limitation, economic constraints on mid-century computing centers empowered administrators to decide what kind of applications–what kind of intellectual or mental labor–computers should automate. Such resource constraints also opened up security and use problems: students regularly gained unauthorized access to others’ accounts to have more time with terminals. Echoing contemporary arguments about distraction, some administrators suggested removing engaging color terminals (“MTS Chatlog”).
However, this problem also gave rise to another perspective: that interaction with computers itself was valuable as a type of training, whether it was technically resourceful or not. Such questions about training and the long-term utility of potential “misuses” of computers helped frame arguments about game-playing specifically.
Beyond arguments about the economic utility of particular uses of computing centers, administrators also made fundamental arguments about the purpose of institutions such as universities. As Michigan administrator Jim Hansen put it: “this is an academic institution and as such should encourage, rather than discourage, experimentation. The U also has obligations to taxpayers and companies which pay for research. These conflicting needs cannot be met without rules, or perhaps ‘guidelines’ would be a better idea. But in the end, we MUST try to specify what is ‘appropriate,’ or it will be ‘first come, first served’ with a shrinking pie and a growing crowd” (“MTS Chatlog”).
Such arguments get to the heart of what universities are for: are students supposed to train for jobs, learn to use tools in an exclusively managerial and resourceful manner, and therefore view themselves in similar ways, or are universities for exploration and creative learning?
Bibliography
Ali, Syed Mustafa, et al. (2023). “Introduction: Histories of Artificial Intelligence: A Genealogy of Power.” BJHS Themes, vol. 8, pp. 1–18.
Arden, Bruce W., “Computing Center History,” February 1963, Box 1, Folder 2, Computing Center records 1952-1996, Bentley Historical Library, Ann Arbor, Michigan.
Aspray, William, and Bernard O. Williams (1994). “Arming American Scientists: NSF and the Provision of Scientific Computing Facilities for Universities, 1950-1973.” IEEE Annals of the History of Computing, vol. 16, no. 4, pp. 60–74, https://doi.org/10.1109/85.329758.
Bates, Tom (1992). Rads: The 1970 Bombing of the Army Math Research Center at the University of Wisconsin and Its Aftermath (Harper Collins).
Bartels, R. C. F., “Establishing the Computing Center,” Box 1, Folder 1, Computing Center records 1952-1996, Bentley Historical Library, Ann Arbor, Michigan.
Bartels, R. C. F., “Letter to S. S. Attwood,” 3 June 1960, Box 4, Computing Center Policies ca. 1959-1979, Bentley Historical Library, Ann Arbor, Michigan.
“CC User Survey,” 1980, Box 5, User Surveys, 1979-1984, Computing Center records 1952-1996, Bentley Historical Library, Ann Arbor, Michigan.
Bortins, Richard, “Letter to A. R. Emery,” 27 April 1976, Box 6, Student Misuse of Comp. 1968-1977, Computing Center records 1952-1996, Bentley Historical Library, Ann Arbor, Michigan.
Campbell-Kelly, Martin, et al. (2014). Computer: A History of the Information Machine. (Routledge).
Daston, Lorraine (1994). “Enlightenment Calculations.” Critical Inquiry, vol. 21, no. 1, pp. 182–202, http://www.jstor.org/stable/1343891.
Edwards, Paul N. (1996). The Closed World: Computers and the Politics of Discourse in Cold War America. (The MIT Press).
Galler, B. A., “The UMES Operating System,” Box 1, Folder 1, Computing Center records 1952-1996, Bentley Historical Library, Ann Arbor, Michigan.
“The History of CSE at Michigan,” University of Michigan, Computer Science and Engineering, https://cse.engin.umich.edu/about/history/. Accessed 23 August 2024.
Kaiser, David (2002). “Cold War Requisitions, Scientific Manpower, and the Production of American Physicists After World War II.” Historical Studies in the Physical and Biological Sciences, vol. 33, no. 1, pp. 131–59.
Kerr, Clark (1963). The Uses of the University. (Harvard University Press).
Levin, Matthew (2013). Cold War University: Madison and the New Left in the Sixties. (The University of Wisconsin Press).
Lowen, Rebecca S. (1997). Creating the Cold War University: The Transformation of Stanford. (University of California Press).
“MTS Chatlog,” 8 October 1980, Box 6, Student Misuse of Comp. 1968-1977, Computing Center records 1952-1996, Bentley Historical Library, Ann Arbor, Michigan.
“MTS Information packet,” 1982, Box 7, MTS General, Computing Center records 1952-1996, Bentley Historical Library, Ann Arbor, Michigan.
Michigan Terminal System, University of Michigan Information Technology Division Consulting and Support Services, November 1991, https://deepblue.lib.umich.edu/bitstream/handle/2027.42/79598/MTSVol01-TheMichiganTerminalSystem-Nov1991.pdf.
Newport, Cal. (2019). Digital Minimalism: Choosing a Focused Life in a Noisy World. (Penguin).
Odell, Jenny (2020). How to Do Nothing: Resisting the Attention Economy. (Melville House).
Rankin, Joy Lisi (2018). A People’s History of Computing in the United States. (Harvard University Press).
Rosen, Larry D., et al. (Apr. 1987). “Computerphobia.” Behavior Research Methods, Instruments, & Computers, vol. 19, no. 2, pp. 167–79, https://doi.org/10.3758/BF03203781.
Schaffer, Simon (1994). “Babbage’s Intelligence: Calculating Engines and the Factory System.” Critical Inquiry, vol. 21, no. 1, pp. 203–27, https://doi.org/10.1086/448746.
Schrum, Ethan (2019). The Instrumental University: Education in Service of the National Agenda After World War II. (Cornell University Press).
Turner, Fred (2006). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. (The University of Chicago Press).
“University of Michigan Computing Center Policy and Budget,” February 1964, Box 4, Computing Center Policies ca. 1959-1979, Bentley Historical Library, Ann Arbor, Michigan.
Volz, R. A., “Letter to ECE Faculty,” 2 April 1979, Box 6, Student Misuse of Comp. 1968-1977, Computing Center records 1952-1996, Bentley Historical Library, Ann Arbor, Michigan.
Weiser, Mark (July 1999). “The Computer for the 21st Century.” Mobile Computing and Communications Review, vol. 3, no. 3, pp. 3–11, https://doi.org/10.1145/329124.329126.
Franz, Sam (September 2024). “Interaction as Training: Computing Center Misuse at the University of Michigan.” Interfaces: Essays and Reviews on Computing and Culture Vol. 5, Charles Babbage Institute, University of Minnesota, 49-60.
About the author: Sam Franz is a PhD Candidate in the History and Sociology of Science at the University of Pennsylvania. His work is broadly concerned with the relationship between US higher education, computing, and the future of work. He currently holds fellowships from the ACM History Committee, Consortium for the History of Science, Technology and Medicine, and the Linda Hall Library.
The author thanks Stephanie Dick, Harun Küçük, Sam Schirvar, and Jacob Bruggeman who all read full drafts of this essay and each provided superior feedback and criticism. He additionally thanks Zac Endter, who endured countless conversations while this piece was still in embryonic form. A Bordin/Gillette Fellowship at the University of Michigan’s Bentley Library enabled the research for this article to be completed.

After overseeing the Charles Babbage Institute's Oral History Program for 26 years, initially as Associate Director and now as CBI Director, and publishing Making IT Work: A History of the Computer Services Industry (MIT Press), which discusses the remarkable growth of this sector in India, I was thrilled to attend Harvard Business School's (HBS) conference, "Oral History and Business in the Global South." This essay reviews the event and the ongoing “Creating Emerging Markets” Oral History Project, which has been running for twelve years. The project is led by two HBS professors: leading business historian Geoffrey Jones (who holds the Isidor Straus Chair in Business History, the same endowed chair Alfred Chandler long held) and standout scholar of Entrepreneurship and Strategy Tarun Khanna (who holds the Jorge Paulo Lemann Professorship). I also reflect on “Creating Emerging Markets” in terms of methodology, products, and uses relative to other oral history programs, including CBI’s.
The design of Jones’ and Khanna’s conference was tremendous, much like the leadership of their oral history project. HBS Research Associate Maxim Pike Harrell provides skilled coordination to the project, and he saw to every detail to make the event run flawlessly. At the end of the essay, I also briefly discuss the remarkable Baker Library, where attendees received a wonderful behind-the-scenes tour from Senior Director of Special Collections Laura Linard.
The HBS event and HBS oral history project connect to computing and the digital world in many ways. These points of connection include both producers and users of digital technology in business in the Global South. Additionally, HBS is developing enhancements to generative artificial intelligence tools to better facilitate the use of their oral history resources. While I highlight these as part of this Interfaces essay, my discussion is not limited to the digital realm.
The quality of every paper at this conference was excellent. While I mention them all below, I discuss a handful in a bit more detail. This is merely to offer a greater sense of the event and to add depth to the review. The conference was organized around themes of oral history methods, research uses, teaching uses, gender, and Global South geographies (Africa, Latin America, and India).
From Bollywood to Augmented Retrieval with Generative AI
The conference kicked off with a panel on “New Approaches to Building and Using Oral Histories,” which was expertly framed by convener Susie Pak, a Professor of History at St. John’s University. The opening speaker was Sudev Sheth, a historian and international studies faculty member of the Lauder Institute at the Wharton School of the University of Pennsylvania. This was the perfect start to the event, as it offered a rich framing of oral history relative to other potential, but sometimes absent or unavailable, resources such as archival collections. Sheth spoke of oral history as “Unconventional Sources,” drawing from his collaborative research on the history of race and gender in Bollywood, India’s film industry. Bollywood, with fewer capital resources, produces far more films annually than Hollywood and has for decades.
For many industries, topics, and themes in business history globally, archival records either were not retained, processed, and preserved, or are not accessible to academic and other researchers, as with closed corporate collections. The lack of available archives is even more pronounced in the Global South, where resources for archival preservation are often scarce. Sheth’s insightful talk demonstrated how oral history can be essential to studying, enlivening, and giving voice to marginalized historical actors. He did this with a case study on race and gender discrimination in Bollywood. Sheth demonstrated how Bollywood has for decades privileged lighter-skinned actors on screen and presented women in submissive, stereotypical ways, lacking in agency. He highlighted a disturbing tendency in Bollywood toward long scenes of men dominating women, including gratuitous rape scenes. Sheth then presented video oral history snippets of Bollywood women actors who explained how they condemned and strongly resisted these practices.

This recovery of otherwise lost voices rang true to me, and it is exactly why at the Babbage Institute we have aimed at doing more interviews with women programmers, systems analysts, entrepreneurs, and computer users over the past two decades. This includes a project past CBI Director Thomas Misa did for the Sloan Foundation on women programmers and one I did on women entrepreneurs in software and services. Similar to HBS’ “Creating Emerging Markets,” we often use these oral histories, along with available archival and other research, in our publications. More importantly, these oral histories become resources for others, especially academics and graduate students, to use. As both a research institute and the leading archive for the history of the digital world, CBI finds that oral histories open up rich archival collection development opportunities.
A difference between CBI and “Creating Emerging Markets” is that we use video or audio recordings only to create transcripts, which are edited for accuracy by interviewees and interviewers. This extends from our research infrastructure mission. It might reduce the educational use of the interviews (though there is some, as we see syllabi online assigning them and daily spikes in usage). “Creating Emerging Markets” likewise produces edited transcripts for research, but it also creates professional video and snippets. This enables HBS’ collection to facilitate tremendous educational opportunities, which fits HBS so well, a school that sets the standard for and influences business and management education throughout the world like no other. HBS does this through its exceptional faculty and its unparalleled (in both size and quality) 50,000 HBS-published case studies.
For those interested in this work on the history of Bollywood, I highly recommend Sudev Sheth, Geoffrey Jones, and Morgan Spencer’s compelling article, “Emboldening and Contesting Gender and Skin Color Stereotypes in the Film Industry in India, 1947-1991,” published in the Autumn 2021 (95:3) issue of Business History Review.
The second paper of the opening session, while quite different, was equally engaging and spoke to issues of preservation. Anders Sjöman, Vice President for Communication at Sweden’s Centre for Business History, spoke on the model of his nonprofit organization. The Centre for Business History (CBH) is a large nonprofit with a wholly owned subsidiary services company. It provides archival storage, processing, editorial, historical analysis, writing/publishing, and related services to companies in Sweden. The CBH has hundreds of clients, including globally recognized names such as IKEA and Ericsson. The nonprofit is owned by its members, the companies it works with to meet their history and archives needs. At 85,000 linear meters of documents, photographs, film, and video, it is one of the largest business archives in the world.
In the 1990s, as a doctoral student in history specializing in business, labor, and technology, I was introduced to organizational history and consulting and found the work fascinating. At the time, I worked as a researcher/historian for The Winthrop Group, a prominent US-based business history and archives consultancy located in Cambridge, Massachusetts. For three years I helped research and assisted with writing a major Harvard Business School Press book on the Timken Company (Pruitt w/ Yost, 1998). Since then, in addition to my faculty and center director posts, I have continued to do a bit of historical consulting for business and nonprofit organizations. I like the idea of a nonprofit providing archival and historical preservation and analysis services in which the companies using the services are members and co-owners. This is a model that could work well for other countries and regions of the world.
The final paper of the opening session was by HBS' Director of Architecture and Data Platforms Brent Benson and Maxim Harrell. They discussed using retrieval-augmented generation—setting limits and parameters around HBS' "Creating Emerging Markets" resources/data—as a tool to better facilitate use of the collections and content generation. While I am generally extremely skeptical and critical of generative AI (as it generates based on data without understanding the data's contexts) and believe it introduces a host of social bias-amplifying risks and realities, I found this discussion of retrieval-augmented generation interesting. I was impressed that in the presentation, and in their responses to questions, the presenters acknowledged risks and were carefully considering them in developing and refining these tools to augment search and use of "Creating Emerging Markets" oral history data. Regardless, this engaging work sparked rich discussion concerning new automation tools versus, or in conjunction with, traditional metadata of descriptors, abstracts, and finding aids, along with Boolean search.
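For readers less familiar with the technique, the sketch below illustrates the general retrieval-augmented generation pattern in minimal form: retrieve the most relevant transcript passages first, then constrain any generated answer to those passages. It is a hypothetical illustration only; the function names, the simple keyword-overlap scoring, and the sample passages are my own assumptions, not the HBS team's implementation.

```python
# A minimal, hypothetical sketch of retrieval-augmented generation over oral
# history transcripts. All names, the keyword-overlap scoring, and the sample
# passages are illustrative assumptions, not the HBS implementation.
from __future__ import annotations

import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    """Lowercase a string and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


def score(query: str, passage: str) -> int:
    """Count overlapping query terms; a real system might use embeddings."""
    q, p = Counter(tokenize(query)), Counter(tokenize(passage))
    return sum(min(count, p[word]) for word, count in q.items())


def retrieve(query: str, passages: list[str], k: int = 3) -> list[str]:
    """Return the k passages that best match the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]


def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a prompt that limits any answer to the retrieved excerpts."""
    excerpts = "\n\n".join(
        f"[Excerpt {i + 1}] {p}" for i, p in enumerate(retrieve(query, passages))
    )
    return (
        "Answer using only the transcript excerpts below; if they do not "
        "cover the question, say so.\n\n"
        f"{excerpts}\n\nQuestion: {query}\nAnswer:"
    )


# Example usage with stand-in transcript snippets.
transcripts = [
    "The founder recalls coping with hyperinflation by adjusting purchase timing.",
    "An executive describes resisting gender stereotypes in the film industry.",
    "A family business leader discusses succession planning across generations.",
]
print(build_prompt("How did leaders cope with hyperinflation?", transcripts))
```

The design choice that matters is the constraint itself: generation is bounded by what retrieval returns from the collection, which is the "limits and parameters" idea described above and part of how the acknowledged risks can be managed.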
Oral History as a Research Tool
The second session was on "Employing Oral History in Research." It began with Lansberg Gersick Advisors (LGA) and Harvard adjunct Devin Deciantis' discussion of the continuity of family enterprises in high-risk environments, focusing on his work and research in Venezuela. Oral history, documentation, and learning from the past can be helpful in meeting the challenges of the generational transition of family businesses. In risky environments such as Venezuela, with its soaring homicide rate, this can be all the more daunting, and all the more important given that resources are scarcer. Deciantis is a particularly gifted and thoughtful orator, and this was an incredible talk. Texas A&M University-San Antonio historian Zhaojin Zeng next gave an impressive paper on the politics of doing archival research and oral history in China, indicating there are both significant possibilities and challenges; people openly dissenting on record from the Chinese Government's standard or preferred perspective on issues and history can face risks. Finally, Meg Rithmire, a political scientist at HBS, sparked a discussion on the differences between interviews in social science versus oral history, including terminology and practice with gathering, saving, processing/editing, and making interview data publicly available.
Oral History of Gender and Business in the Global South
With my strong interest in social history, and especially gender history and gender studies in business, I found the three papers on "Gender Dynamics through Oral Histories in Business" especially intriguing. The oral histories for the "Creating Emerging Markets" Project have generally been conducted in person—interviewer, interviewee, and a film crew (though experimentation with enhanced video conferencing tools is also underway with the project).
The scholar who has done more in-person oral histories for the project than anyone in the Global South, Andrea Lluch, gave two brilliant talks in different sessions of the event. Lluch holds professorships at the School of Management, University of Los Andes, Colombia, and at the National University of La Pampa in Argentina. Additionally, she is a Researcher at the National Council of Scientific and Technical Research of Argentina. Her research and oral histories focusing on women entrepreneurs span both Argentina and Colombia. Among the various lessons she emphasized are the importance of, and learning possibilities from, interviewing throughout the hierarchies of organizations, providing the example of drug cartels.
Rounding out this session were outstanding papers by standout business historians Paula de la Cruz-Fernández (University of Florida) and Gabriela Recio (Business History Group, Mexico City). De la Cruz-Fernández's talk, "Estoy, in Right Place," focused on capturing circumstance and historical contingency in exploring one woman entrepreneur in Miami, Florida, while Recio explored several women leaders' experiences as they tried to navigate male-dominated business culture.
In a different setting, but with the same principle of navigating organizational hierarchies and gaining understanding, just over a decade ago I had the opportunity to do a sponsored research project with Thomas Misa. This project examined the impact of National Science Foundation cyberinfrastructure (FastLane and the internal e-Jacket system) on the scientific and engineering research community it serves, as well as on NSF as an organization. It was an oral history-based research project, as few documents existed. We conducted in-person oral histories with 400 people at 29 universities (vice presidents for research, deans, PIs, sponsored research administrators, and department administrators) and at NSF (top leadership, division directors, program officers, administrative assistants, and clerical staff). Our understanding of universities, NSF, and social (race and gender) and research impacts was thoroughly enhanced by getting the perspectives of those at the lower levels of organizational hierarchies. We published a book from this research, FastLane: Managing Science in the Internet World (Johns Hopkins University Press, 2015).
Sessions four and six of the HBS conference explored "Oral History and the Creation of Knowledge" in two large regions, Africa and Latin America, respectively. These sessions served as broad reminders that archival infrastructure is sparser in some regions of the world, heightening the critical role oral history and ethnography can play in understanding business and culture there.
Political scientist and Vice Dean of Education at the University of Toronto Antoinette Handley provided an insightful talk on "African Business Elites: Changing Perceptions of Power." Laurent Beduneau-Wang offered an intriguing examination of "Ethnography and Oral History in Morocco." HBS' Marlous van Waijenburg spoke broadly on "Capturing African Histories and Experiences," a talk which, by admission and intent, was not so much on oral history; instead, it was an expert high-level overview of issues in African history and historiography and thus provided rich context.
On the topic of Latin America, Andrea Lluch gave another terrific paper, "Managing Business in Unstable Contexts," where she focused on the instability of currency in Argentina and how adjustments in the timing of purchases, planning, and salary increases were used by consumers and producers to cope with the seemingly untenable circumstances of frequent hyperinflation. Marcelo Bucheli, a business historian and political economy specialist at the University of Illinois, in turn spoke on "Political Uncertainties in Emerging Markets" in Latin America. Finally, Miguel A. Lopez-Morell (University of Murcia) presented a paper co-written with business historian Bernardo Batiz-Lazo (Northumbria University), entitled "Digital Conquistadors," contributing strongly to our understanding of the history and historiography of banking in Latin America.
Oral History and Pedagogy
The penultimate session, number five, "Employing Oral History in Teaching," was a particularly important one for the event. Tarun Khanna convened this session. In general, oral history is grossly underutilized in undergraduate and graduate education, as well as in high school—both from the standpoint of drawing on oral histories to instruct and of teaching the skills and techniques to conduct oral history well. This session explored various strategies for utilizing and enhancing skills for oral history in education. Chinmay Tumbe of the Indian Institute of Management spoke on "Best Practices for Oral History Teaching and Collection Development." Next, Sudev Sheth offered a paper entitled "Teaching Global Leadership," on how oral history can be a helpful tool. Lastly, Arianne Evans of Harvard's Graduate School of Education presented "Incorporating Oral History into High School Pedagogy."
“Creating Emerging Markets”
From start to finish, this was a well-designed event where themes of methods, gender, geographies, research uses, and educational uses all came together to provide a coherent and vibrant whole. Discussions following the papers after each of the six sessions, as well as during the breaks and meals, were lively. We also all received a copy of Geoffrey Jones and Tarun Khanna’s Leadership to Last: How Great Leaders Leave Legacies Behind. It is a 330-page book with a paragraph-long biography on each interviewee, and then edited passages from forty-four interviewees selected from the “Creating Emerging Markets” Oral History Project.

This book serves both as a source of management lessons from the past and as a glimpse into a significant project and its resources. For business historians and social science researchers, the passages are short and make a reader wonder about the context of the whole interview. That said, the interviewer questions (from Khanna and other researchers) are well constructed to elicit insightful responses. Without question, it leaves you wanting to read more. While this might seem a liability, it is not. I am certain one of Jones' and Khanna's goals is to bring attention to the larger project and its resources, and for me, it did just that. Further, I especially like the choice of the seven themes/chapters, each with five to eight interviewees:
- Managing Families
- Committing to Values
- Innovating for Impact
- Contesting Corruption
- Challenging Gender Stereotypes
- Promoting Inclusion
- Creating Value Responsibly
In reading this terrific book, as well as visiting other select oral histories on the "Creating Emerging Markets" website, the extensive research preparation of the interviewers stands out. In our program at CBI, modeled on leading programs at the University of California, Berkeley, and Columbia University, this continues to be our practice as well. Such preparation is what has produced four decades of what we have long termed "research-grade" oral histories. We believe this commitment to research preparation, and to having appropriate, well-trained interviewers, is key to our success in securing and delivering upon sponsored oral history research support from NSF, Sloan, NEH, DOE, ACM, IBM, and others.
“Creating Emerging Markets” is producing similar research infrastructure and resources, as they also invest in video and snippets which add so much to visibility and educational use— “research and educational-grade interviews” perhaps?
Public Goods
While we are not doing production-quality video at CBI, I serve on the History Committee of the leading professional organization for computer science, the Association for Computing Machinery (ACM), and for that organization we are doing production video oral histories with Turing Award winners (the Turing Award being the equivalent of the Nobel Prize for computer science) and video snippets from each of these. This undoubtedly makes them more useful in computer science education, as well as in the history of computing and software. With "Creating Emerging Markets," the interviews are rich for research and education in history, sociology, management, international studies, African studies, Latin American studies, Asian studies, and other disciplines.
A term Geoffrey Jones used repeatedly at the conference is "public goods": the "Creating Emerging Markets" Project is producing public goods. I agree that the term makes sense and is fitting for a business school and business history (I have an MBA in addition to a Ph.D. in history). When I have my CBI and historian, humanities, and social science hats on, it is what I might call public resources or historical research infrastructure—that is how I commonly speak about CBI's Oral History Program (and archives) and the public resources it produces (full transcripts of many hundreds of two-to-sixteen-hour interviews and 320 extensively processed archival collections). "Public goods" works very well for the "Creating Emerging Markets" Oral History Project, as the video snippets are engaging for audiences in education, corporate education, and the interested wider public.
In its dozen years, "Creating Emerging Markets" has produced 181 oral histories with interviewees from thirty-four countries, and its site currently gets 80,000 hits per year. The interviews on the Global South span such topics as leadership, governance, business education, family business, race, gender, political economy, and more. A link to "Creating Emerging Markets" is in the bibliography, and I highly recommend visiting this incredible resource. For historians of business and management scholars of the Global South, it is tremendously valuable. And given the well-selected video snippets, it will appeal broadly.
Oral history theory, practice, collections, and publications remain rare in emerging markets. Existing projects and the surrounding literature on both interviews and methods/techniques tend to focus more on politics and culture than on business and enterprise. For instance, there are such books as David Carey Jr.'s Oral History in Latin America: Unlocking the Spoken Narrative, Anindya Raychaudhuri's Narrating South Asian Partition: Oral History, Hassimi Oumarou Maïga's Balancing Written History with Oral Tradition: The Legacy of the Songhoy People, and Daphne Patai's Brazilian Women Speak: Contemporary Life Stories. As such, "Creating Emerging Markets" is filling an important void, and it does much more besides.
"Creating Emerging Markets" is creatively and expertly building new possibilities in the business oral history space and, importantly, intersecting with its targeted geography of the Global South. Through its use of experienced and diverse business historians, management scholars, and social scientists as interviewers; through its video teams and well-selected snippets; through the development of enhanced, configurable, and automated searching and generation techniques; and through its edited transcripts, it is innovating in methods, tools, and thematic and geographic targets in ways that strongly advance research and education opportunities for understanding emerging markets and their political economy and cultural milieu.
HBS' "Oral History and Business in the Global South" was such an intellectually delightful event. It worked terrifically as an engaging conference on its own and also—along with the 2022 book Leadership to Last—as a coming-out party for an extremely important project, "Creating Emerging Markets," which is developing and making public wonderful research and educational resources, with enormous possibilities, on parts of the world and on leaders and workers who contribute so much and too often are ignored. The quality of the scholarship presented, the quality and importance of the "Creating Emerging Markets" Project, and the intellectual generosity of the conveners, presenters, and audience made it special to be part of this lively and vital conversation.
HBS’ Baker Library and Its Historical Research Gems
All the events were at HBS' Chao Center, including a reception and catered Indian cuisine dinner after the first day of the conference. About half of the two dozen or so presenters and attendees took part in a fantastic tour of Baker Library led by Laura Linard. The sublime special collections include unique archives—corporate records, annual reports, ledgers, personal papers—extensive rare books, an art collection, and much more. The records on business history date back to the 14th century, with amazing materials from that century and every one since. The thoughtfully presented display cases, elegant reading room, and art throughout the public areas of Baker Library add much to the intellectual and aesthetic allure.
In 1819 the Ecole Spéciale de Commerce et d'Industrie (now the ESCP Business School) was established in Paris; it is often credited as the first business school in the world. Harvard Business School, established in 1908, has the distinction of being the first to introduce the standard Master of Business Administration (MBA) degree. Through its incredible faculty, its methods, and its unequaled published series of case studies, which are used far more than any other series in business schools globally, Harvard has an outsized impact on business and management education and on educating and influencing future leaders in the U.S. and around the world. Baker Library documents this institutional history, as it also documents business history globally.
We took the elevators down to the closed stacks and saw where the Alfred D. Chandler Collection and other archival collections are held. Of particular interest to computer historians, the library has the personal/business papers of legendary Digital Equipment Corporation founder and longtime leader Kenneth Harry Olsen. Laura, in addition to giving us the group tour, was kind enough to meet with me for an hour afterward. We discussed archives and history and the respective missions of our institutions, exploring if there were areas where we might be helpful to one another in collecting. It was a wonderful discussion and the perfect ending to two engaging and intellectually exciting days at Harvard Business School.
Bibliography
Carey, David Jr. (2021). Oral History in Latin America: Unlocking the Spoken Narrative. (Routledge).
“Charles Babbage Institute for Computing, Information and Culture Oral History Program and Resources.” CBI Oral Histories | College of Science and Engineering (umn.edu)
“Creating Emerging Markets,” Harvard Business School. “Creating Emerging Markets” - Harvard Business School (hbs.edu)
Jones, Geoffrey, and Tarun Khanna. (2022). Leadership to Last: How Great Leaders Leave Legacies Behind. (Penguin Business).
Maïga, Hassimi Oumarou. (2012). Balancing Written History with Oral Tradition: The Legacy of the Songhoy People (Routledge).
Misa, Thomas J., and Jeffrey R. Yost. (2015). FastLane: Managing Science in the Internet World. (Johns Hopkins University Press).
Patai, Daphne (1988). Brazilian Women Speak: Contemporary Life Stories. (Rutgers University Press).
Pruitt, Bettye, with the assistance of Jeffrey R. Yost. (1998) Timken: From Missouri to Mars—A Half Century of Leadership in Manufacturing. (Harvard Business School Press).
Raychaudhuri, Anindya. (2019) Narrating South Asian Partition: Oral History (Oxford University Press).
Sheth, Sudev, Geoffrey Jones, and Morgan Spencer. (2021). "Emboldening and Contesting Gender and Skin Color Stereotypes in the Film Industry in India, 1947-1991." Business History Review 95:3 (Autumn): 483-515.
Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry (MIT Press).
Yost, Jeffrey R. (April 2024). "Harvard Business School's "Oral History and Business in the Global South": A Review Essay and Reflection." Interfaces: Essays and Reviews in Computing and Culture Vol. 5, Charles Babbage Institute, University of Minnesota, 38-48.
About the author: Jeffrey R. Yost is CBI Director and HSTM Research Professor. He is Co-Editor of Studies in Computing and Culture book series with Johns Hopkins U. Press and is PI of the new CBI NSF grant "Mining a Useful Past: Perspectives, Paradoxes and Possibilities in Security and Privacy." He is author of Making IT Work: A History of the Computer Services Industry (MIT Press), as well as seven other books, dozens of articles, and has led or co-led ten sponsored history projects, for NSF, Sloan, DOE, ACM, IBM etc., totaling more than $2.3 million, and conducted/published hundreds of oral histories. He serves on committees for NAE, ACM, and IEEE, and on multiple journal editorial boards.

Management thinkers tend to fade from popularity when a new management trend arrives. These new practices appear and are followed by endless rounds of meetings, new buzzwords for the office cronies, and extra work that ends up driving employees crazy or their company out of business. But amidst all the changes in management styles, one voice kept its calm, soothing tone, with no buzzwords, just firm guidance, through six decades. That voice was Peter Drucker's.
For Drucker, management is a social function and a liberal art (Drucker 1989). He was amazed at the speed with which management emerged and became an integral part of society. In 1989 he reflected on the institution of management and how rarely in human history an institution has been so transformative so quickly: in less than 150 years, management had created a new world order and provided a new set of rules (Drucker 1989).
Drucker stayed relevant and was part of every manager's vocabulary through his extensive writing and strong presence in the mass media. He was the author of 39 books, translated into 36 different languages (O'Toole 2023), and he also wrote many columns for the Wall Street Journal, Harvard Business Review, The Atlantic Monthly, and The Economist. In his management books, he consistently discussed not just the function of managing but also its social impact. It was clear to Drucker that management's very pervasiveness as a social function had become its Achilles' heel. He wanted to bring to managers' attention that management must be accountable, and to point out the sources of management's power and its legitimacy (Drucker 1989).
The goal of this essay is to document the enduring legacy of Peter Drucker for management theory. We aim to evaluate his clarity of vision, especially regarding the information revolution he foresaw very early, before computers were fully adopted into organizations. We use primary and secondary sources to assess how we are confronting and overcoming the management challenges that Drucker foresaw for this new century. First, we present Peter Drucker, the person, then we discuss the historical validity of using the term “guru” when referring to Drucker. Next, we present a review of Chapter 4, “Information Challenges” of his 1999 book, Management Challenges for the 21st Century to evaluate his incisive commentary regarding the impact of information, information systems and information technology in institutions and society. Finally, we conclude with a short discussion about where the Information Revolution has taken us, as of 2023.

Peter Drucker, The Person
Peter Ferdinand Drucker (November 19, 1909 – November 11, 2005) was born in Vienna under the Austro-Hungarian Empire, where his father, Adolf Drucker, was a high-level civil service lawyer. His household was a place where intellectuals and high government officials gathered regularly, among them Joseph Schumpeter (the Austrian-born American economist and Harvard University professor) and his uncle Hans Kelsen (the Austrian jurist and philosopher). At eighteen, after finishing his studies at the Döbling Gymnasium, and having difficulties finding a job in post-World War I Vienna, he decided to move to Hamburg, Germany.
Drucker became an apprentice in a cotton trading company. He started writing for the Austrian Economist (Der Österreichische Volkswirt) which gave him some credentials and the confidence to pursue a career as a writer. Shortly after, he moved to Frankfurt to work at a daily newspaper. He enrolled at the University of Frankfurt where he earned a doctorate in International and Public Law at the age of 22.
He remained in Frankfurt for two more years; the rise to prominence of National Socialism and Hitler caused him to leave. Some Austrians were becoming infatuated with the transformation of Germany; Drucker was not one of them. He decided to go to London in 1933, where he worked first for an insurance company and then as chief economist for a private bank. While in London, he married an old friend from the University of Frankfurt, Doris Schmitz. Together they emigrated to the United States in 1937.
The Druckers lived in Bronxville, New York, for five years. In 1942, they moved to Bennington, Vermont, where Drucker held a position at Bennington College as a professor of politics and philosophy. In 1950 they moved again, this time to Montclair, New Jersey, where Drucker became a professor of management at New York University, a position he held for 20 years. In 1971, he became the Clarke Professor of Social Science and Management at Claremont Graduate School (now Claremont Graduate University), a position he maintained until his death. He was instrumental in the development of one of the United States' first executive MBAs for working professionals. Maciariello, in writing about Drucker's driving force, recognizes the vast influence that history, and Drucker's own early life experiences in Austria and Germany during the World Wars, had on his thinking about empowering citizens of free societies, "so they would never be tempted to turn to authoritarian substitutes" (Maciariello 2014, xviii).
Peter Drucker, The “Guru”
"Guru" is a term borrowed from Hinduism, and it has now lost much of its power in management studies, where "influencer" may be used instead. Historically, however, the term has a very precise place when discussing Drucker. As historians, we bring it forward in this essay, fully recognizing that for today's readers it may have unsavory connotations and that it does not fully comply with our current ethical values. We hope that readers may see the value of its usage without compromising any aspect of equity, diversity, and inclusion.
Drucker is considered one of the most influential management thinkers ever. He was called the "leading management guru" (Harris 2005) and an "uber-guru" (Wooldridge 2009) after his death, and he was influential from the time his first book on management was published in 1954. Over the years, he authored more than 25 books that have shaped the modern corporation. In the process, he established himself as someone who knew managers, not someone who just theorized about them.
Brad Jackson (2001) has observed the management "guru" fad since 1994: originally as a manager seeking help and guidance, later as an adult educator facilitating access to lectures by "gurus," and finally as a researcher seeking to add to the debate about "guru" theory. He identifies the "guru" fad as a phenomenon that began during the 1980s in the United States but then expanded throughout the rest of the world. He also documents the backlash during the mid-1990s against the "gurus" themselves, but mostly against management fads or fashions. He identifies rhetoric as the key element that 'makes' a management "guru." He also invites his readers to deconstruct this rhetoric and demystify the 'doctrines of salvation' offered by those "gurus."
From Jackson’s analysis and critique, we see Drucker’s rhetoric not as a ‘doctrine of salvation’ but as a conversation that is open, full of possibilities, emerging from his own experiences. Drucker practiced management. He was a keen observer of his actions and was able to write about them clearly and convincingly. We can think of his writing as a report from the field. When he wrote about Ford Motor Co., Sears Roebuck, or IBM (Drucker 1954), he was, in essence, writing business history and inviting his readers to witness history with him. Then, after demonstrating the practice of management in these world-renowned organizations, he invited us to consider the social impact of management decisions.
Perhaps the aspect that resonated the most with his readers was his absolute conviction that freedom and liberty are invaluable and not always guaranteed, something he wanted all of us never to forget.
“In a free society the citizen is a loyal member of many institutions; and none can claim him entirely or alone. In this pluralism lies its strength and freedom. If the enterprise ever forgets this, society will retaliate by making its own supreme institution, the state, omnipotent” (Drucker 1954, 460).
This was so important for Drucker that Jim Collins, popular business consultant and co-author of Built to Last: Successful Habits of Visionary Companies (2004), when addressing participants at the Drucker Centennial on November 2, 2009, declared, "Drucker contributed more to the triumph of freedom and free society over totalitarianism than anyone in the 20th century, including perhaps Winston Churchill" (Collins 2009). His legacy is well recognized, not only by those who worked closest to him (Claremont Graduate University named its business school after Drucker) but all over the world. Because of this, people have been joining the Drucker Institute, which was created in 2006 as an extension of the Drucker Archives with the mission of "Strengthening Organizations to Strengthen Society." They convene annually to continue Drucker's conversations. They not only look back to Drucker's writings, but they also look forward to new ideas that can be generated out of his work. In many ways, that is, in essence, what gurus are: guiding lights into the future. To many, Drucker remains the greatest source of managerial wisdom.

Management Challenges for the 21st Century
At age 90, beyond the expected lifespan of most people, Drucker published his 34th book. In his lifetime, he witnessed an enormous transformation not only in Europe but all over the world: two World Wars, the rise and fall of the Soviet Union, the construction and fall of the Berlin Wall, the Cold War, the emergence of China, and the ossification of many American institutions. He knew that he needed to warn his readers about what was coming in the new millennium. He started the conversation at the end of the 1980s with The New Realities (1989) and decided to continue it, regarding the challenges management would face in the new century, with Management Challenges for the 21st Century (1999).
Drucker sought to discuss the new paradigms of management and how these paradigms had adapted and changed toward the end of the 20th century. He wanted us to change our basic assumptions about the practice and principles of management. He concentrated on explaining the new information revolution by discussing the information executives need and the information executives owe. He brought the discussion of knowledge-worker productivity front and center.
Information Challenges
Drucker starts Chapter 4 of his 1999 book with a declaration:
“A new Information Revolution is well underway. It has started in business enterprise, and with business information. But it will surely engulf ALL institutions of society. It will radically change the MEANING of information for both enterprises and individuals.”
This declaration identifies two issues that we can use to check how accurate he was in his predictions. The first issue is Drucker’s claim that the information revolution will affect ALL institutions of society, and the second is about the changing MEANING of information.
Issue I: Institutions
If we were to look now for institutions that have not been affected by the information revolution, we would be hard pressed to identify one. Governments all over the world now have an online presence that was not evident at the end of the 20th century, even in countries with small populations like Tuvalu, Palau, San Marino, Liechtenstein, and Monaco, as well as in the countries with the smallest GDPs, including Syria, Tuvalu, Nauru, Kiribati, and Palau. Investigations of the impact of information technology on governments have ranged from the potential of globalization, facilitated by information technology, to call into question the very existence of the nation state (Turner 2016), to the influence of e-commerce on the ability of governments to raise tax revenue (Goel and Nelson 2012), to the threat to critical national infrastructures from cyber attacks (Geers 2009), and, more recently, to how social media may facilitate social upheaval and revolution, as experienced during the Arab Spring (Bruns, Highfield and Burgess 2013).
Businesses have not only been transformed by this Information Revolution but have also been created by it and have helped bring it about (Google; Facebook, now Meta; Twitter, now X; Zoom; etc.). Businesses are now more prone to being impacted by cybersecurity threats (Kshetri 2009). Educational institutions have been transformed with massive open online courses (MOOCs); online learning has grown into the way most institutions communicate with their students; assignments are delivered through learning management systems; and recruitment, application, and acceptance are all web-based. Banking services have moved online, as people can even deposit cheques simply by taking a photograph within their banking app. Legal institutions have developed online resources that include, among others, "blawgs" (internet-based blogs dealing with topics related to law) and other resource repositories. The mass media has been particularly impacted: the demise of newspapers and the emergence of disinformation, with 'fake news' or 'alternative facts,' are just two examples of how much the information revolution has transformed these institutions.
Health institutions have been forced to transform to deal with the explosion of medical information available on the web. Popular sites include PatientsLikeMe.com and a multitude of communities around specific diseases: WeAreLupus.org, WeAreFibro.org (for fibromyalgia), WeAreHD.org (Huntington's disease), and others.
The last group of institutions we discuss is the military, obviously a group for which it is more difficult to get information, since secrecy is part of its raison d'être. At the same time, military organizations are part of the government, and as such they have a mandate to have a presence online. The US military has its own top-level domain (.mil), owing to its heavy involvement in the development of internet technologies through partnerships with research centers and universities.
Because Drucker focused on the wider societal impact of management, we are also able to examine how the information revolution has affected personal relationships through platforms like Facebook (over 3 billion users), WhatsApp (2 billion users), Instagram (1.4 billion), TikTok (1 billion), MeetUp (60 million), and others for dating and courtship. According to Cesar (2016), online dating services generated $2 billion in revenue per year in the US alone and expanded at an annual rate of 5% between 2010 and 2015. In 2016, a study by the Pew Research Center (Smith 2016) revealed that 15% of American adults reported using online dating sites and/or mobile dating apps and, further, that 80% of Americans who had used online dating agreed that it was a good way to meet people.
The study's author noted that growth in the use of online dating over the previous three years had been especially pronounced for two groups that had not historically used online dating: the youngest adults (18 to 24 years of age) and those in their late 50s and early 60s. Some of the increase in use is credited to the use of gamification by apps such as Tinder, which makes choosing a 'dating partner' as easy as swiping to the left or right on a mobile device. Tinder, now operating in 196 countries, reports its users make an average of 1.6 billion swipes per day (Tinder 2024). Lest we think the only impact has been positive, recent academic research has shown the negative impact of partner phubbing (phone snubbing) on depression, relationship satisfaction, and personal well-being (Roberts and David 2016).
It is clear from the preceding discussion that Drucker was very accurate in his declaration regarding ALL institutions being impacted by the information revolution.

Issue II: Meaning
A less obvious analysis regarding the MEANING of information is possible only by pointing out what the meaning of information was before the revolution started. Meaning is essentially a concern of Semiotics. One of the greatest semioticians of all time, Umberto Eco, was a proponent of information as a way of being human. In his essays Open Work (1962), Minimal Diary (1963) and Apocalyptic and Integrated (1964), he talks about a dichotomy between optimistic and pessimistic intellectuals who either reject or accept mass culture.
Eco finds the term information to be an ambiguous one. What interested Eco about information theory is the principle that the quantity of information (as opposed to the meaning) contained in a message is in inverse proportion to its probability or predictability, something that to him resembled the effect of art, particularly modern art. Thus, Eco argues, art in general may be conveying a much higher degree of meaning than more conventional kinds of communication. He argues that:
“…the quantity of information conveyed by a message also depends on its source … information, being essentially additive, depends on its value on both originality and improbability. How can this be reconciled with the fact that, on the contrary, the more meaningful a message, the more probable and the more predictable its structure?” (Eco 1989, 52).
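For reference, the principle Eco is drawing on can be written compactly; this is the standard formalization from Shannon's information theory (my gloss, not Eco's own notation), in which the information content of a message x is inversely related to its probability p(x):

```latex
% Shannon self-information: the less probable a message, the more bits it carries.
I(x) = \log_2 \frac{1}{p(x)} = -\log_2 p(x)
```

On this measure a highly predictable message carries little information, which is exactly the tension Eco presses on when he notes that the most meaningful messages are often the most probable and predictable.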
It was clear enough for Eco that "meaning and information are one and the same thing" (Eco 1989, 53). He was puzzled by the practicality with which the new information revolution started. The intellectuals behind the new information age, in a very myopic, practical way, created the Millennium Bug (or Y2K Bug) not out of ignorance but out of a lack of vision. Eco was puzzled by programmers who did not anticipate the date problem fifty years down the road, yet the Millennium Bug was in fact a business decision to keep costs down when digital storage space was highly expensive.
Eco’s sense of amazement is also about the greatest repository ever built, the World Wide Web. He is especially concerned about how digital information is never forgotten, allowing us to re-interpret it, constantly, giving it a sense of fluid meaning, until we reach information overload: “To my mind, to have 14 million websites is the same as having none, because I am not able to select” (Eco et al. 1999, 192).
That is a good way to start to see those who immediately accepted the information revolution, a.k.a. the information age, and those who dismissed it while continuing their existence without questioning the impact that information would have on their lives. Among the ‘integrated,’ John Seely Brown, along with Paul Duguid, published The Social Life of Information (2000) to document the meaning of information in our social life. They start by reminding us of how, at one time, the lack of information was a problem: “It now seems a curiously innocent time, though not that long ago, when the lack of information appeared to be one of society’s fundamental problems” (Brown & Duguid 2000, 12).
They document how chronic information shortages threatened work, education, research, innovation, and economic decision-making. They remind us throughout the book that "we all needed more information," as well as how "for many people famine has quickly turned to glut." This is a good indication that the meaning of information has changed for most of us.
Issue III: Concepts
Drucker also indicates that the information revolution is “a revolution in CONCEPTS” (Drucker 1999, 97); by this he meant that it was not about Information Technologies (IT), Management Information Systems (MIS), or Chief Information Officers (CIOs). He accused the information industry—the IT people, the MIS people, the CIOs—of failing to provide information by being more interested in the technology than the information side of their mandate.
“So far, for fifty years, Information Technology has centered on DATA—their collection, storage, transmission, presentation. It has focused on the ‘T’ in ‘IT.’ The new information revolutions focus on the ‘I.’ … What is the MEANING of information and its PURPOSE?” (Drucker 1999, 97).
He was correct on some issues but missed one of the biggest transformations of this industry: going from IT to information and communication technology (ICT) with the arrival of Apple's iPhone in 2007. Indeed, people were not looking for information when buying an iPhone, but this piece of technology redefined our interaction with technology, from keyboard-enabled communication to a freer way to get what we want, anywhere, anytime. Not only that, but this device was so revolutionary that it reshaped many industries. The mobile phone giants (Nokia, BlackBerry, Samsung, etc.) were not prepared, and their businesses were completely changed. Digital cameras have almost disappeared, since the smartphone allows us to always have a camera with us. Personal Digital Assistants (PDAs), launched in 1996, were needed to supplement the capabilities of mobile phones, but they too disappeared, since they were no longer necessary when carrying a smartphone, leading to the demise of the world leader, Palm Inc. Even Apple's own iPod was cannibalized by the iPhone, since people could now use their phones to listen to their iTunes collections; after the rise of Spotify, streaming became the new way to listen to music.
Historically speaking, the information industry needed to concentrate on technology first, since it emerged in a world of analog technologies. To go fully digital, a completely new set of skills was needed. Since everything was new, several options were tested, and if companies jumped too early into one, they ran the risk of becoming obsolete very quickly. The race for the personal computer left many big players, including IBM, with serious losses. In those fifty years that Drucker mentions in his statement, the world went from very expensive room-sized mainframe computers to affordable portable laptops; from punched cards to magnetic tapes to 5¼" and 3½" floppy disks; from 2,000 to 760 million instructions per second. The industry needed to focus on technology until the technology was good enough to deliver that sense of instant connectivity, 24/7, anywhere. Therefore, businesses needed to play a game of either wait-and-see or jumping on board into an unknown future, following Rogers' (1962) Diffusion of Innovations theory: Innovators → Early Adopters → Early Majority → Late Majority → Laggards.
The aspect that Drucker wanted to highlight in his assessment is that users, especially executive users, needed to engage in the information revolution not from the perspective of being seduced by new tools, but from that of being able to do things that were not possible before these technologies. That is why he stressed the importance of concepts, concepts that would emerge only by questioning what those users were doing: "top management people during the last few years have begun to ask, 'What information concepts do we need for our tasks?'" (Drucker 1999, 100).
In answering that question, we have seen the purpose of business computing transform from Decision Support Systems (DSS) to Executive Information Systems (EIS), to Business Intelligence (BI), to the current Business Analytics (BA). Also, the role of data as a by-product of Transaction Processing Systems (TPS) has been transformed into data as an asset, and data as the new oil. Are these new concepts or just slogans?
To grasp the sense in which Drucker used the term 'concept' in his analysis of the information revolution, it is perhaps important to remember that he taught philosophy at the beginning of his academic career at Bennington College. In his autobiography, Drucker writes about Karl and Michael Polanyi, and even though he does not mention Ludwig Wittgenstein specifically, Wittgenstein was living in Vienna at the same time Drucker was. This period has been documented as that of the first Vienna Circle, a time when many revolutionary new ideas in philosophy were emerging. More than an etymological definition of 'concept,' we need an epistemological one to capture the sense that Drucker may have had in mind when writing about information challenges.

Peacocke (1996) discusses "concepts" as ways of thinking about something: a particular object, a given property, a relation, perhaps an entity. He emphasizes that concepts have to be distinguished from stereotypes. He reminds us that while the theory of concepts is part of the theory of thought and epistemology, the theory of objects is part of metaphysics and ontology: "…a theory of concept is unacceptable if it gives no account of how the concept is capable of picking out the objects it evidently does pick out" (Peacocke 1996, 74).
Even though Drucker was writing about the information revolution at the end of the 20th century, he had been clear since The Practice of Management that information, to him, is "the tool of the manager" (Drucker 1954, 412). He explains what that concept means to him by listing the collection of 'objects' it picks out:
"The manager has a specific tool: information. He does not 'handle' people; he motivates, guides, and organizes people to do their own work. His tool – his only tool – to do all this is the spoken or written word or the language of numbers. No matter whether the manager's job is in engineering, accounting, or selling, his effectiveness depends on his ability to listen and to read, on his ability to speak and to write. He needs skill in getting his thinking across to other people as well as skill in finding out what other people are after … must understand the meaning of the old definition of rhetoric as 'the art which draws men's hearts to the love of true knowledge'" (Drucker 1954, 413).
The choices are clear. For Drucker, from the time he started writing about the practice of management up to his discussion of the information revolution, reading, writing and arithmetic, and rhetoric, are the concepts that make information useful to managers. After nearly fifty years, he recognizes that not much has changed for executives:
“In business across the board, information technology has had an obvious impact. But until now that impact has been only on concrete elements—not intangibles like strategy and innovation. Thus, for the CEO, new information has had little impact on how he or she makes decisions. This is going to have to change” (Drucker 2002, 83).
Perhaps for no one else but Drucker, at the dawn of the 21st century it was clear that data alone would not suffice to compete and win in the new economy. Drucker puts the onus on each knowledge worker by correctly pointing out that “no one can provide the information that knowledge workers and especially executives need, except knowledge workers and executives themselves” (Drucker 1999, 124).
Conclusion
Drucker was correct when, in 1999, he warned us of the challenges we would be facing in the 21st century, especially those regarding the new information revolution. Looking at the three issues he brought to our attention (Institutions, Meaning, and Concepts), we can see that over the last 23 years all of them have been transformed by the ICTs that have emerged (and continue to emerge). Institutions now have a digital reality that keeps them present to their constituents 24/7, anywhere and everywhere.
The meaning of information, and especially the issues of information overload, misinformation, and disinformation, presents a constant threat to institutions and people. Privacy and security online are now concerns of management that formerly were not present in its digital world. Willingly or unwillingly, we must recognize, for example, that X (formerly Twitter) is an essential tool for taking the pulse of politics. How would the world of business have survived the 2020 pandemic without the emergence of Zoom?
Can concepts help us retake control of the meaning of information? That is something that the new generation of managers will have to decide. The generation that has grown up digital is starting to take its place in the workforce, and we will have to wait and see if they take this control back to its rightful owners, the people.
Drucker had a view of society that no longer describes the society we live in. He was very optimistic about what individuals gain by participating in social groups. His revolution was a gentle one, offering many advantages not only to managers and organizations but to ordinary individuals, something that is no longer true. The revolution we are witnessing is a harsher one, with huge inequalities generated between those with access to these technologies and those without; corporations valued by intangibles with little regulation; and the emergence of Neoliberalism. It is a revolution biased against groups marginalized by race, gender, disability, or access to education.
Looking at primary and secondary sources, we have opened a conversation regarding how organizations are confronting the challenges that Peter Drucker foresaw at the close of the 20th century. At least for us, from the perspective of the information revolution, we think that we are doing what is needed to conform to the demands of the new realities. But there is still much work to be done, and having Drucker's legacy in his writing is reassuring.
Bibliography
Brown, J. S., and P. Duguid. (2000). The Social Life of Information (Harvard Business School Press).
Bruns, A., T. Highfield and J.E. Burgess (2013). "The Arab Spring and social media audiences: English and Arabic Twitter users and their networks," American Behavioral Scientist, 57, (7), 871-898.
Cesar, M. (2016). “Of Love and Money: The Rise of the Online Dating Industry,” http://www.nasdaq.com/article/of-love-and-money-the-rise-of-the-online-dating-industry-cm579616 (accessed February 3, 2024).
Collins, J. (2009). Address, available as a video at: www.druckerinstitute.com/peter-druckers-life-and-legacy/tributes-to-drucker (accessed January 19, 2024).
Drucker, P.F. (1954). The Practice of Management (Harper & Brothers).
Drucker, P.F. (1978). Adventures of a Bystander (Harper & Brothers).
Drucker, P.F. (1989). The New Realities: In Government and Politics, in Economics and Business, in Society and World View (Harper & Row).
Drucker, P.F. (1999). Management Challenges for the 21st Century (Harper-Business).
Drucker, P.F. (2002). Managing in the Next Society (Truman Talley Books – St. Martin’s Press).
Eco, U. (1989). The Open Work (Harvard University Press).
Eco, U., S.J. Gould, J.C. Carrière, and J. DeLumeau (1999). Conversations about the end of time (Penguin).
Geers, K. (2009). “The Cyber Threat to National Critical Infrastructures: Beyond Theory,” Information Security Journal: A Global Perspective, 18, (1), 1-7.
Goel, R.K. and M.A. Nelson (2012). "Internet Diffusion and Implications for Taxation: Evidence from U.S. Cigarette Sales," Policy & Internet, 4, (1), 1-15.
Harris, K. (2005). “Peter Drucker, leading management guru, dies at 95, 11 November,” available at: www.bloomberg.com/apps/news?pid=newsarchive (accessed January 15, 2024).
Jackson, B.G. (2001). Management Gurus and Management Fashions, London: Routledge.
Kshetri, N. (2009). “Positive Externality, Increasing Returns and the Rise in Cybercrimes,” Communications of the ACM, 52 (12), 141-144.
Maciariello, J. A. (2014). A Year with Peter Drucker (Harper-Business).
O'Toole, D. (2023). "The Writings of Drucker," in The Daily Drucker, November 12, 2023, https://dailydrucker9.wordpress.com/ (accessed February 20, 2024).
Peacocke, C. (1996). "Concepts." In Dancy & Sosa (eds.), A Companion to Epistemology, The Blackwell Companions to Philosophy (Blackwell).
Ramirez, A. (2020). “Of Bugs, Languages, and Business Models: A History,” Interfaces: Essays and Reviews in Computing and Culture, Vol. 1, Charles Babbage Institute, University of Minnesota, May 2020: 9-16.
Roberts, J.A. and M.E. David (2016). “My life has become a major distraction from my cell phone: Partner phubbing and relationship satisfaction among romantic partners,” Computers in Human Behavior, 54, 134-141.
Rogers, E. (1962). Diffusion of Innovations (Free Press).
Smith, A. (2016). 15% of American Adults Have Used Online Dating Sites or Mobile Dating Apps, 11 February, available at: http://www.pewinternet.org/2016/02/11/15-percent-of-american-adults-have-used-online-dating-sites-or-mobile-dating-apps/ (accessed February 3, 2024).
Tinder (2024). About Tinder, available at: https://www.gotinder.com/press?locale=en (accessed February 3, 2024).
Turner, G. (2016). "The Nation-State and Media Globalisation: Has the Nation-State Returned — Or Did It Never Leave?" In Flew T., Iosifidis P., Steemers J. (eds) Global Media and National Policies. (Palgrave Macmillan), 92-105.
Wooldridge, A. (2009). Peter Drucker: uber-guru, paper presented at the 1st Global Peter Drucker Forum, Vienna, 19-20 November, available at: www.druckersociety.at/repository/abstracts.pdf (accessed January 6, 2024).
Alex Ramirez (March 2024). "A Revolution: Drucker Saw it Coming." Interfaces: Essays and Reviews in Computing and Culture Vol. 5, Charles Babbage Institute, University of Minnesota, 23-37.
About the author: Alejandro Ramirez is an Associate Professor of Business Analytics and Information Systems at the Sprott School of Business – Carleton University in Ottawa, Ontario, Canada. He has a PhD in Management – Information Systems (Concordia), an MSc in Operations Research & Industrial Engineering (Syracuse), and a BSc in Physics (ITESM). He has been active with the Business History Division of ASAC since 2012 and has served as Division Chair and Division Editor. He is interested in the history and the stories of information systems in organizations.
In the past five years, corporations have been pressured to stop focusing only on stockholders and instead to also pay attention to the needs of other stakeholders: employees, communities, indeed entire societies. Many high-tech firms, notably social media companies such as Facebook (Meta) and Twitter (X), are being criticized for letting fake facts proliferate, threatening the very foundations of democracy and civic behavior, while others, such as Amazon, are accused of treating their employees almost as if they worked in sweatshops. They are not allowed to hide behind new names, such as Meta and X, and regulators are chasing them all over the world for their near-monopolistic dominant positions or for treating personal data in a cavalier fashion. Many IT firms are big enough that they actually affect political and social behavior across many countries. In short, they are no longer just companies within some vast economy but pillars of the societies in which they operate. That is largely why so many in the public are demanding (a) that they respect individuals' privacy and (b) that they take seriously the values of their customers (e.g., being environmentally friendly and inclusive in serving and hiring people of myriad racial and ethnic identities).
CEOs are pledging to do better and to serve a broader range of stakeholders, while some are holding out. Nonetheless, it seems nearly every high-tech firm anywhere is under the microscope of regulators, legislators, interest groups, customers, and societies at large. Historians, economists, business school professors, sociologists, and—just as important—some wise and experienced business leaders understand that how to deal with stockholder vs. stakeholder issues boils down to corporate values, policies, how a firm treats customers and employees, and its commitments to society—in short, to corporate culture. So many high-tech firms are so new, however, that they are, to be blunt about it, borderline clueless on how to shape, implement, and change their corporate cultures. Yet the growing empirical evidence is that managing corporate culture may be more important than coming up with some clever business (product or service) strategy. College students founded Facebook in 2004, and it became the poster child for a firm that many feel has yet to be run by experienced executives, specifically Mark Zuckerberg, one of those students. Unlike leaders at other major public firms, Zuckerberg still holds a majority of voting shares, which means he maintains ultimate control over Meta's activities. Google was established in 1998 but eventually hired Eric Schmidt, who was positioned in the media as the "adult" the company needed. And then there is the chaos unfolding at X (Twitter) that is causing much head-scratching.
So, if getting one’s corporate culture “right” promises peace with society and its regulators and legislators on the one hand and, as historians and business professors argue, promotes the economic success of a firm on the other, where does a high-tech company learn what to do? In the world of IT, sources of positive examples are rare, either because firms are simply too new or, more realistically, because too many have not been around long enough to develop effective corporate norms. A few are obvious: H-P, Microsoft, and increasingly Apple. They are obvious for having been around long enough to have succeeded, made errors, and corrected them to varying degrees. Old manufacturing firms, of which there are many, are not always seen as relevant, but within IT we have IBM, H-P, and Apple as potential case studies from which to learn. The most senior of these is IBM.

The hard facts about why IBM is the “Mother of All Case Studies” for the IT community are difficult to dispute. First, it was established in 1911 and is still in existence, making it the longest-running IT firm in history. Regardless of how one thinks about IBM, it survived and more often than not thrived, and when adding in the firms that originally made up IBM, it has done so since the early 1880s.
Second, the empirical evidence increasingly demonstrates—as do my other books about the IT industry’s evolution, the use of computing, and IBM’s market share—that its tabulators supplied roughly 90 percent of the early data processing industry for a half-century. Over the next four-plus decades, depending on which country one looked at, 70 to 80 percent of all mainframes came from IBM, dipping to about 50-60 percent only in the USA, where there was more IT competition than anywhere else in the world.
Third, from the 1930s to the late 1980s, IBM also dominated the conversation about the role of data processing (which involved more than just mainframes)—this third point being an important issue I have recently been studying. Attempts to shape public and regulatory opinion about Facebook (Meta), Twitter (X), Amazon, Google, and a myriad collection of Asian firms have either proved harmful to the firms (e.g., as is happening with X at the moment) or done nothing to move the needle, whether in public opinion or in the minds of their customers. My personal observations of the IT world over the past four decades have led me to conclude that the IT community has done a poor job of presenting its various perspectives and value propositions in ways that would enhance its economic and other roles in societies. That failure stems from mismanaging image creation and from not dominating the conversation—shaping “everyone’s” perspective—in ways that would optimize the effectiveness of a firm. This third reason goes far to explain why so much negative press and so many negative impressions exist about, for example, social media and online merchants, despite everyone’s belief that they have no choice but to rely on these firms.
All of this is to explain why I wrote Inside IBM: Lessons of a Corporate Culture in Action (Columbia University Press, 2023). I followed frameworks and descriptions of what constitutes corporate culture produced largely by sociologists and business school scholars over the past half-century. I would recommend such a strategy for those looking at, say, IT companies. What these various studies point out is that there is a set of common issues to explore. Successful companies define very carefully what their values should be (e.g., IBM’s “Basic Beliefs”), which one can consider, as I phrase it, a form of “corporate theology”—it is there, remains pervasive, and people are held accountable for living it. In IBM’s case this involved providing high levels of customer service, respecting all individuals (including customers and citizens at large), and operating in a meritocratic environment where high-quality work (“excellence”) was the expected outcome. Such companies implement practices, policies, job descriptions, reward systems, training, and communications that support these values for decades. These are not the program du jour or this quarter’s initiative.
The culture experts also point out that such an ethos should attract, engage, and become part of the worldview of customers, regulators, and, yes, the families of employees. I devote an entire chapter just to how IBM as an institution interacted with families for nearly a century. The evidence made it very clear that this was an organized, purposeful exercise, which went far to explain the nature and timing of benefits provided (e.g., health insurance, pensions) and of sponsored events (e.g., Christmas parties with Santa Claus, summer picnics, employee/spousal trips to Rome, Hawaii, or other resorts).
Keeping other IT companies in mind as possible candidates to study using IBM’s experience as an initial framework, I divided my study into two large categories. The first was an overview of theories of corporate culture and how they applied in large enterprises, accompanied by specific case studies. The latter included discussions of IBM’s corporate theology, how the company functioned on a day-to-day basis (often called by scholars and employees “The IBM Way”), the role of managers in implementing IBM’s culture, why unions could not succeed on the US side of IBM, and, of course, employee benefits and relations with families. I could have ended the study at this point, because I had explained how IBM developed and used its corporate culture from roughly the 1910s through the mid-1980s and had clearly documented an effective alignment of culture and strategy that proved successful. However, massive structural and technological changes in the IT world from the 1980s to the present jarred, harmed, and forced changes on IBM, altering both its strategy and its culture. I tell that story right up front in an introduction to the historical study because I wanted also to reach current executives, at IBM and all over the industry, about what works and what does not. In fact, I tell them that if all they want is “the answer” to what they should learn from IBM, that introduction is enough, along with perhaps the last chapter if they want their culture to be ubiquitous around the world. As a brief aside, IBM’s culture was universally applied and, as Catholic IBM employees often pointed out, it was as consistent in values, rituals, reporting, and activities as the temporal Catholic Church. And just like the Church, IBM implemented its practices essentially the same way everywhere. Its Latin was English, its bishops were vice presidents, its cardinals general managers, and so forth.
Then there is the second part of the book, which offers a partial history of IBM’s material culture. Material culture concerns the cultural and archeological evidence left by a society or organization—in our case such items as products (e.g., computers, boxes with software in them or on floppies and tapes), baseball caps and coffee mugs with the firm’s name on them, logoed publications, postcards of factories and other buildings, pens and pencils, and myriad other items. Business historians have, to put it bluntly, not studied these ephemera, especially the massive quantity of such objects that have been so popular in the IT industry. Go to eBay and type in the name of any IT firm of your choice and you will be surprised, even shocked, at the volume of such material. These logoed items, and the objects used daily in buildings, did not come to be by accident. As a cultural anthropologist or advertising expert will point out, each carries a message, reflects a point of view, and says much about the culture of the firm—THINK signs for over a century (Figure 1). But, as demonstrated in IBM’s case, they are also tied to the purpose of the firm. A PC expert walked around wearing a PC-logoed baseball cap in the 1980s, and in the early 2000s an IBM Consulting Group employee might use a Cross pen with the gray logo of that part of IBM to make it known she was not a “box” peddler for Big Blue. And so it went.
This book studies a number of types of ephemera to see what they reveal about IBM’s culture. They expose a great deal not normally discussed in conventional historical monographs. For example, IBMers are considered serious, no-nonsense people—an image carefully and consistently cultivated for decades by the corporation, and one that paid enormous dividends (remember “Nobody ever got fired for recommending IBM”?)—but internally IBM had a vast, long-standing humorous side. Thomas Watson, Sr. was presented as a furiously serious leader for the more than four decades he was in charge of IBM. Yet he lived in a sales culture where humor and skits were the norm. See Figure 2—yes, he is wearing a Dutch wooden shoe and is smiling; he may well have just participated in a skit in a sales office. Other CEOs did too.

Now look at Figure 3, which, like so many images that floated around inside the firm, shows a salesman as arrogant, elitist, and dressed quite fashionably. It reflected an attitude found in parts of IBM where attire was a humbler affair and where it was believed that the fastest way to build a career in IBM was through sales. Fair? True? It does not matter, because this was a company that employed hundreds of thousands of people, and its divisions were like neighborhoods in a city, each with its own personality.

More seriously, IBM wanted the public to see it as solid, respectable, professional, and reliable. In the first half of the twentieth century companies all over the world produced postcards to project such images; IBM was no exception. Watson came from what was considered at the time one of the best-run, most highly regarded firms in America—NCR—and he commissioned postcards that looked just like the ones he remembered from his prior firm. As IBM expanded and built new buildings, out came more postcards. Some modeled Harvard’s buildings with their trademark cupolas, obviously communicating that these sites were part of a financially solid enterprise. As the firm moved to establish itself as big and elite, a customer or visitor could obtain these postcards only by visiting such buildings and experiencing them in person. A customer could think, “I am privileged; I was able to be involved in a meeting with the Great IBM.” These were also mementos—think memorializing links—of someone’s connection to IBM (see Figure 4).

There is a detailed discussion of IBM’s machines, complete with some of the most boring product pictures; every IT company produces such images, and CBI has an impressive collection of them from many firms that no longer even exist. But these photographs carry messages and reflect corporate attitudes. For example, look at Figure 5 (figure 3.3, p. 95, in the book). We can agree that it is boring and would not win any prize. However, this 1948 image signaled that IBM thought of this new generation of hardware much as it had thought of tabulating equipment for decades: as a data processing system, not a collection of various products (known colloquially as “boxes”). Note the deliberate exposure of the cables in front of the machines, connecting them all together so that one entered data at one end and answers and reports came out the other. In real life, those cables would have been placed behind the machines out of sight or, later, under those ubiquitous white-squared floor panels known as “raised floors.” Dozens of product pictures later, the message was the same: modern, integrated, fashionably designed, big, powerful, and reliable.

IBM also became one of the largest publishers in the world; indeed, there is some dispute about whether it was THE largest. I take the side of those who think it was the second biggest, after the U.S. Government Printing Office. No matter: it was massive, making any collection of academic or trade publishers look tiny in comparison, and its publication practices were far more efficient than, and quite different from, what we experience as authors outside the firm. In the longest chapter in the book, I explain how many publications there were (and are); how they were written, published, and distributed; and their purpose. That purpose was not just to persuade someone to buy IBM products, or to show users how to use these machines, or to show IBM customer engineers how to install them and keep them running, or systems engineers how to debug problems. From its earliest days, IBM wanted to dominate what “everyone” worldwide thought about the use of all manner of data processing: customers, opinion makers, academics, executives, politicians, military leaders, employee families, and entire company towns (e.g., where IBM had factories), among others.
This study demonstrated that IBM had to deliver a consistent message across all its publications, had to do so for decades, and had to do so in sufficient volume to be heard above the din of its competitors and even of those publishing through the IEEE, the ACM, and trade houses producing books on computing (e.g., Prentice-Hall, Wiley, even MIT). Many of those other outlets published articles and books by IBM authors, again for decades (by me too). But computing was complicated and required a vast body of new information. The chapter starts with a quote from one programmer: “I had perhaps 20 pounds of manuals at my desk to be able to do my job. It was a very paper-intensive time.” An IBM customer engineer called in to fix a machine—and there were many thousands of them—came to a data center armed with a fat volume that looked like a Bible, with a similar trim size and black leather binding, which he used every working day of his career. In fact, when they retired, these engineers often kept their personal copies. That barely suggests the quantity of pages and dense technical text, but the chapter does not spare the reader. Dictionaries of IT terms ran to many hundreds of pages and had to be constantly updated throughout the second half of the twentieth century, indirectly serving as testimonials to the vast amount of incremental change that occurred in the technology. Pages in the customer engineering black volumes were replaced once or twice weekly with updated ones, a practice that continued for decades.

Every facet of IBM’s business practices, culture, beliefs, rituals, and events was the subject of company publications. Please read that sentence again. Hundreds of thousands of employees and millions of other people encountered IBM publications, and yet no histories of these have been written. Other large enterprises also engaged in extensive publishing—an important finding of this study, because they learned from each other and all needed publications for similar reasons.
Today, Facebook, Apple, and Amazon may have YouTube videos, but IBM and other large firms made training movies, and later videos, beginning in the 1930s; IBM’s archive has a large collection of its own productions. Hollywood’s output was, to be frank about it, tiny in comparison to what such large enterprises as Ford, GM, GE, NCR, and IBM collectively produced. Their movies and videos projected corporate images integrated into all their other message-delivering media for decades, and yet these, too, have not been studied. This book about IBM suggests that, as with other corporate gray literature, there is much to be learned from such material about other companies and institutions functioning in the world of IT.
An issue that rattles around discussions of corporate culture concerns the level of flexibility in tailoring behavior and operations at the local level, recognizing that business practices in France may well differ from those in, say, Vietnam or New Zealand. What IBM’s experience indicates is that corporate values can be essentially universal, that rituals can be too—and good for employees—but that processes and operations can be diversified to accommodate local customs and laws. There are also ground rules for everyone. For instance, American corporations are not allowed by US law to engage in bribery, yet in some countries such practices are common. IBM does not care for that custom; its employees are not allowed to participate in such behavior. On the other hand, for decades IBM had a global no-liquor rule, most strictly enforced in the United States, but one could find wine in a French IBM cafeteria. Effective corporate cultures can be flexible in how they behave without compromising the core beliefs, image, and ubiquitous practices of the enterprise. That is why, for instance, Apple stores look very much the same around the world, and its products are universal.
Finally, there is the question of how IBM so standardized its values, behaviors, rituals, practices, managerial behaviors, messages, and expectations of its employees across what ultimately involved some 175 countries. Part of the answer was provided in my earlier publication (IBM: The Rise and Fall and Reinvention of a Global Icon, 2019), where I argued that it lay in the expansion of similar sales, manufacturing, and research facilities around the world, with shared product development, similar sales practices, and comprehensive actions to thwart competitors—all traditional business history themes. However, Inside IBM goes a step further, calling the diffusion of a corporate-wide culture the essential strategy for success. Why? How? These are questions currently engaging IT firms and, more broadly, large national and multinational companies around the world, which brings us back to the point made at the start of this paper: corporations have multiple stakeholders whom they must serve.
In IBM’s case, several behaviors made universalizing its practices possible. First, early on, management established a set of beliefs that it refused to alter for decades, maintaining these for so long that eventually employees could not remember a time or a set of values that were different, at least not until late in the twentieth century when much changed. Second, customers—largely international corporations—encouraged IBM to install products and services on their behalf that were consistent worldwide. IBM was able to do that because it implemented similar practices around the world. That is how, for example, a large multinational firm that wanted an integrated global supply chain could come to IBM to actually implement it.
Third, generations of managers were trained in the firm’s consistent values, culture, and practices, then were called upon to implement and preserve these as a central feature of their work. American management teams exported their ways worldwide, and as new generations of employees grew up in that culture outside the US, they exported it to still other countries. For example, an IBM manager who grew up in IBM Spain would, upon becoming an executive in Central or South America, implement—think practice, behave—much as he or she had earlier in Spain.
Fourth, communications regarding anything IBM were purposefully kept consistent worldwide for decades. Rules, practices, and guidelines were communicated and kept (or changed) in a consistent manner worldwide to the degree that local custom made possible. Much commonality proved possible to maintain in most countries, and where it was not, the firm was prepared to withdraw from a national market for as long as required (e.g., India, South Africa, and recently Russia).
While historians of the IT world are strongly attracted to its technology—as is happening with the Internet’s technological evolution, emergence, and diffusion—there is also growing interest in the less precisely defined issues of managerial practices, the sociology of institutions, role in society, and so forth. Corporate culture has long been seen in business and managerial studies as crucial, and today many CEOs are declaring it to be so. IBM has much to teach today’s generation of scholars and executives about how that was done effectively, even if occasionally poorly.
Finally, I should point out what is missing from my research—so far—and from this article: the role of minorities in multiple countries, and specifically in the U.S. side of the business. Women in IBM, in particular, are insufficiently studied, yet they played roles common to many corporations in each decade and, as in other firms, slowly expanded their positions broadly and upwardly in the company’s managerial ranks. But that is a story that requires a far deeper dive than I have been able to perform—again so far—in this article or in my earlier studies.
Bibliography
Cortada, James W. (2018). Change and Continuity at IBM: Key Themes in Histories of IBM. Business History Review 92(1), 117-148.
Cortada, James W. (2019). IBM: The Rise and Fall and Reinvention of a Global Icon. (MIT Press).
Cortada, James W. (2023). Inside IBM: Lessons of a Corporate Culture in Action. (Columbia University Press).
Denison, Daniel R. (1997). Corporate Culture and Organizational Effectiveness. (Denison Consulting).
Hofstede, Geert. (1991). Cultures and Organizations: Software of the Mind. (McGraw-Hill).
Kotter, John P. and James L. Heskett. (1992). Corporate Culture and Performance. (Free Press).
Schein, Edgar H. (1985, 1989). Organizational Culture and Leadership. (Jossey-Bass).
James W. Cortada (February 2024). “High Tech Corporate Culture: IBM’s Experience and Lessons.” Interfaces: Essays and Reviews on Computing and Culture Vol. 5, Charles Babbage Institute, University of Minnesota, 11-22.
About the author: James W. Cortada is a Senior Research Fellow at the Charles Babbage Institute, University of Minnesota—Twin Cities. He conducts research on the history of information and computing in business. He is the author of IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). He is currently conducting research on the role of information ecosystems and infrastructures.
Turing Award winner Geoffrey Hinton and former Google chief executive Eric Schmidt are only two among many people who have recently voiced their concerns about an existential threat from AI. [See, for example, Brown 2023; Roush 2023.] But what does it mean for a technology or, for that matter, something else such as a natural disaster, a war, or a political ideology to be an existential threat? An existential threat is directed at a particular target audience that is threatened and at a particular aspect of their life that is at risk. We only care, as a society, about existential threats if the target audience is dear to us and the item at risk is significant. The consequences of an existential threat may be biological, psychological, economic, organizational, or cultural. Thus, multiple academic fields provide a perspective on existential threat.
In this article, our goal is not to argue about whether AI is or is not an existential threat—or even to describe who AI threatens and what elements are at stake. Instead, our focus here is to provide the reader with a multidisciplinary toolset to consider the concept of existential threat. To achieve this, we introduce the reader to four organizations that study this topic, along with the relevant literature by individual scholars. It is beyond the scope here to apply these various perspectives on existential threat to developments in AI.

Research Institutes
We begin by profiling four research institutes: the Käte Hamburger Center for Apocalyptic and Post-Apocalyptic Studies, the University of Cambridge Centre for the Study of Existential Risk, the University of Oxford Future of Humanity Institute, and the Stanford (University) Existential Risks Initiative. (A much longer list of relevant organizations can be found on the Stanford Existential Risks Initiative webpage: https://seri.stanford.edu/resources.) [Editors' note: links to all websites mentioned are in the Bibliography]
Käte Hamburger Center for Apocalyptic and Post-Apocalyptic Studies
The Käte Hamburger Center for Apocalyptic and Post-Apocalyptic Studies (CAPAS) at Heidelberg University, which began operation in 2021, is one of 14 research centers sponsored by the German Federal Ministry of Education and Research. We spend more time on this center because it is probably the least familiar of these centers to the American reader. The goal of CAPAS is to study the “effects of catastrophes and end-time scenarios on societies, individuals, and environments”. CAPAS expands on this goal on the About page of its website:
By focusing on social mechanisms and coping strategies of crises through the prism of the apocalypse, CAPAS provides applied knowledge about successful and failed responses to existential threats and their consequences. On this basis, the Centre contributes to topical debates on potential ecological and social breakdown. CAPAS thus sees itself as a platform for reflection on perceived doom that helps to strengthen societal and political resilience to end-of-life scenarios of all kinds and to anticipate social risks.
In its first years, the center has drawn on a strength of its home university in the study of South/East Asia and the Americas by examining “the apocalyptic imaginary that frames the COVID-19 pandemic”. Its interest goes beyond this crisis to other kinds of apocalyptic events, such as world environmental crises and digital control over human thought and action. The current lecture series, for example, includes topics related to the apocalypse and post-apocalyptic thinking in various ways, concerning smart cities, Orientalism and Western decline, desert architectures, raising children, psychoanalysis, and concepts of time.
The Center employs a transdisciplinary approach that includes the humanities as well as the social and natural sciences. As the Center explains on their Our Premises page: “Thus, in addition to empirically based natural and social sciences, CAPAS uses the potential of interpretive humanistic approaches to comprehensively reconstruct and analyse conceptions and experiences of apocalypses and post-apocalyptic worlds. Thereby, the humanities allow us to investigate and challenge possible future scenarios that are beyond the predictive capacity of purely empirical sciences.”
The Center invites ten fellows from around the world each year. As outlined on the Fellows page, the Fellow’s research at the Center must relate to: “our three research areas (A) the apocalyptic and postapocalyptic imaginary (ideas, images, discourses), (B) historical events that were perceived or framed as (post-)apocalyptic experiences, and (C) current, empirically observable developments that could bring about the end of the world as we know it and its aftermath scenarios.”
In addition to the fellowship program, the Center offers public lectures, a bi-weekly newsletter, and a quarterly magazine. The directors of the Center are Robert Folger, a professor of Romance literature, and Thomas Meier, an archeologist who studies pre- and early history; both are professors at Heidelberg University.

Stanford Existential Risks Initiative
In 2019 Stanford University created the Stanford Existential Risks Initiative to engage the student body, faculty, and outside community in the study and mitigation of existential risks, which they define as:
risks that could cause the collapse of human civilization or even the extinction of the human species. Prominent examples of human-driven global catastrophic risks include 1) nuclear winter, 2) an infectious disease pandemic engineered by malevolent actors using synthetic biology, 3) catastrophic accidents/misuse involving AI, and 4) climate change and/or environmental degradation creating biological and physical conditions that thriving human civilizations would not survive.
The initiative teaches a freshman course on Preventing Human Extinction and offers a postdoctoral fellowship in the areas of advanced artificial intelligence, biological risks, nuclear risks, and extreme climate change; a summer undergraduate research fellowship; an annual conference (see the 2023 conference proceedings); a speaker series; a discussion group; and an online Cascading Risks survey open to adults.
The initiative is run by Paul Edwards, a professor of science, technology, and society who is well known in the history of computing community for his monographs The Closed World and A Vast Machine [Edwards 1997, 2010], and Steve Luby, a professor of medical and health policy, together with student organizer Michael Byun, an undergraduate computer science major.
Cambridge Centre for the Study of Existential Risk (CSER)
In 2012, the University of Cambridge began to reallocate resources to form an interdisciplinary research center to study how to reduce existential risks associated with emerging technologies. The first postdocs were appointed in 2015. The co-founders were Martin Rees, the Astronomer Royal and professor of cosmology and astrophysics at Cambridge; Jaan Tallinn, co-founder of Skype and Kazaa; and Huw Price, Bertrand Russell Professor of Philosophy at Cambridge. The director is Matthew Connelly, professor of international and global history at Columbia University. The Centre’s goals are:
- To study extreme risks associated with emerging and future technological advances;
- To develop a methodological toolkit to aid in the perception and analysis of these risks;
- To examine issues surrounding the perception and analysis of these risks in the scientific community, the public and civil society, and develop strategies for working fruitfully with industry and policymakers on avoiding risks while making progress on beneficial technologies;
- To foster a reflective, interdisciplinary, global community of academics, technologists, and policymakers; and
- To focus in particular on risks that are (a) globally catastrophic in scale, (b) plausible but poorly characterized or understood, (c) capable of being studied rigorously or addressed, (d) clearly play to CSER’s strengths (interdisciplinarity, convening power, policy/industry links), and (e) require long-range thinking. In other words, extreme risks are where we can expect to achieve something.
The Centre is working to have several impacts: to influence policy in various national and international governmental bodies; to build, through “collaborations, media appearances, reports, papers, books, workshops – and especially through our Cambridge Conference on Catastrophic Risk”, a community of people from various walks of life to help reduce existential risk; to convene experts from academia, industry, and government to share cutting-edge knowledge about relevant topics such as biological weapons; and to build an academic field concerned with long-term AI safety.
In 2023, CSER was made part of Cambridge’s newly created Institute for Technology and Humanity (ITH), which is dedicated to research and teaching “that investigates and shapes technological transformations and the opportunities and challenges they pose for our societies, our environment and our world.” This organizational structure may lead to collaboration with two other centres that are now part of ITH: the Leverhulme Centre for the Future of Intelligence and the Centre for Human-inspired AI.
Oxford Future of Humanity Institute
In 2005, University of Oxford philosopher Nick Bostrom created the Future of Humanity Institute as part of a larger new initiative called the Oxford Martin School. Much of its focus has been on existential risks of both the natural and the human-made varieties, studied using the tools of mathematics, philosophy, and the social sciences. For example, Bostrom and Milan Ćirković published an edited volume on Global Catastrophic Risks [Bostrom and Ćirković 2008], and between 2008 and 2010 the Institute hosted the Global Catastrophic Risks Conference. For more than a decade, the Institute has been concerned about AI, e.g., with the publication of Bostrom’s Superintelligence: Paths, Dangers, Strategies [Bostrom 2014]. Currently, the Institute has research groups working in the following areas: macrostrategy [the impact of present-day activities on the long-term fate of humanity], governance of AI, AI safety, biosecurity, and digital minds.
The Institute, and Bostrom in particular, has been an influential voice in public discussions of existential risk, for example in giving policy advice to the World Economic Forum and the World Health Organization. Bostrom and colleagues at the Institute have published prolifically, with academic, policy, and general-public audiences in mind. (See the extensive list of publications on the Future of Humanity Institute Publications page and the list of Technical Reports.)
Much of the Institute’s work is done in collaboration with other organizations, including the Centre for Effective Altruism, with which it shares space, and its parent organization, the Oxford Martin School. Other collaborators have included, from the early days, DeepMind—and more recently, the Partnership on AI (which includes DeepMind, OpenAI, Facebook, Amazon, IBM, Microsoft, Sony, Human Rights Watch, UNICEF, and others).

Academic Literature on Existential Threat
While there have been many historical studies of individual catastrophes, from the fall of the Roman Empire to the Titanic to COVID-19, only a few books provide a historical overview of the various kinds of existential threats. Examples include a book by the award-winning television reporter and producer John Withington, Disaster! A History of Earthquakes, Floods, Plagues, and Other Catastrophes, which gives a long history since antiquity; and, with a focus on catastrophes of the post-WW2 era, The New Age of Catastrophe by Alex Callinicos, a Marxist professor of European studies at King’s College London, and Existential Threats: American Apocalyptic Beliefs in the Technological Era by Lisa Vox, a history professor at the University of Massachusetts Boston. [Withington 2012; Callinicos 2023; Vox 2017] Also see a useful article by Thomas Moynihan on the intellectual history of existential risk and human extinction. [Moynihan 2020]
Harvard Business School professor Clayton Christensen published a best seller in the business and management literature entitled The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. [Christensen 1997] While many technologies engender new product development, Christensen points out that some innovations can be “disruptive”—causing poorer product performance and possibly even the failure of individual products, companies, or industries. For example, small Japanese motorcycles were disruptive to the typical large motorcycles sold in America by BMW and Harley-Davidson, while health maintenance organizations were disruptive to traditional health insurers. Or, more pertinent here, transistors and microcomputers were disruptive in the computer industry. This led to an extensive business and management literature on disruptive technologies and their threat to products, firms, and industries—extending and refining Christensen’s model, finding “solutions” to it, and applying it in various contexts.
Nick Bostrom, the founder of the Oxford institute mentioned above, is the foremost scholar from academic philosophy studying human-created existential threats, such as might occur with AI, nanotechnology, or synthetic biology. He has written on such topics as the comparison between human-created and natural disasters, the possibilities of superintelligence through AI and ways of categorizing superintelligence, and policy approaches to addressing existential threats. While his work is widely cited and praised, his theories have critics, including the philosopher Daniel Dennett and the AI researcher Oren Etzioni; and he has become controversial both for a racist email he sent and for his advocacy of the life philosophy of effective altruism, which some women believe creates a culture of sexual harassment and abuse. [See Bilyard 2023; Alter 2023] In addition to Bostrom’s two books cited above, also see his Anthropic Bias and the edited volume Human Enhancement. [Bostrom 2002; Savulescu and Bostrom 2009]
For some time, social psychologists have been interested in the psychology of risk, and there is a significant body of empirical research on the topic. These scholars are interested in how people perceive and analyze risk, how they determine the amount of risk associated with a particular hazard, how they behave towards and make decisions about risk, how individuals communicate with others about risk, their behavior towards and attitudes about accidents, how risks are treated by society, and how risk is connected to one’s identity. [See, for example, Breakwell 2014.] Breakwell writes:
Since the first edition of The Psychology of Risk was published in 2007, the world has changed. The world is always changing but the years since 2007 have seen enormous macro-economic and socio-political changes globally—the chaos in the world banking system and the financial crisis and recessions that it presaged; the Arab Spring and the revolutionary shifts in power in the Middle East with rippled consequences around the world; the development of ever-more sophisticated cyber-terrorism that can strike the private individual or the nation-state with equal ease. …The question then arises: amidst these changes in the face of hazard, do the psychological models earlier built to explain human reactions to risk still apply? Do they need to be modified? [Breakwell 2014, p. xii.]
With risk taken to the extreme of existential risk, there is additional scholarship in this area in recent times. [See, for example: Currie 2019; Jebari 2015; Klisanan 2023; Kreutzer 2021; Lawn et al. 2022; Ord 2020; Schubert, Caviola, and Faber 2019; Schuster and Woods 2021; Syropoulos et al. 2023a, 2023b, 2023c]
Sometimes, when people claim that AI or some other development is an existential threat, their claim is simply an unexamined rhetorical strategy to cry wolf. Many such utterances are unaccompanied by a careful examination of the exact nature of the threat or of the particular consequences that might ensue. The readings pointed to here give the reader a variety of approaches for examining these potential threats and their consequences more thoroughly—for example, the various kinds of existential threats that may arise and how they compare to one another, and various lenses for assessing the technical, political, economic, and cultural consequences they might have.
Thanks to my colleagues Jeff Yost and Jim Cortada for comments on an earlier draft.
Bibliography
Alter, Charlotte. 2023. Effective Altruism Promises to Do Good Better. These Women Say It Has a Toxic Culture of Sexual Harassment and Abuse, Time (February 3), https://time.com/6252617/effective-altruism-sexual-harassment/ (accessed 6 December 2023).
Bilyard, Dylan. 2023. Investigation Launched into Oxford Don’s Racist Email, The Oxford Blue (15 January).
Bostrom, Nick. 2002. Anthropic Bias: Observation Selection Effects in Science and Philosophy (Routledge).
Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies (Oxford University Press).
Bostrom, Nick and Milan Ćirković, eds. 2008. Global Catastrophic Risks (Oxford University Press).
Breakwell, Glynis M. 2014. The Psychology of Risk (2nd ed., Cambridge University Press).
Brown, Sara. 2023. Why Neural Net Pioneer Geoffrey Hinton is Sounding the Alarm on AI, MIT Management (23 May), https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai#:~:text=Hinton%20said%20he%20long%20thought,things%20humans%20can%27t%20do (accessed 6 December 2023).
Callinicos, Alex. 2023. The New Age of Catastrophe (Polity).
Cambridge Centre for the Study of Existential Risk (CSER). Our Mission.
Cambridge Centre for the Study of Existential Risk (CSER). About us.
Christensen, Clayton. 1997. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (Harvard Business School Press).
Currie, Adrian. 2019. Existential Risk, Creativity & Well-adapted Science, Studies in History and Philosophy of Science Part A 76: 39-48.
Edwards, Paul N. 1997. The Closed World: Computers and the Politics of Discourse in Cold War America (MIT Press).
Edwards, Paul N. 2010. A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming (MIT Press).
Jebari, Karim. 2015. Existential Risks: Exploring a Robust Risk Reduction Strategy, Science and Engineering Ethics 21, no. 3: 541-554.
Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies (CAPAS). About Us.
Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies (CAPAS). Fellowships at CAPAS.
Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies (CAPAS). Our Premises.
Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies (CAPAS). Research.
Klisanin, Dana. 2023. Existential Risk: From Resilience to Antifragility, Intersections, Reinforcements, Cascades: 50-59.
Kreutzer, Philipp Jonas. 2021. Would You Think We Are Doomed Because of Climate Change?, School of Economics and Management, Lund University, https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9058790&fileOId=9058792 (accessed 6 December 2023).
Ladden-Hall, Dan. 2023. Top Oxford Philosopher Nick Bostrom Admits Writing ‘Disgusting’ N-Word Mass Email, The Daily Beast (12 January) (accessed 12 January 2023).
Lawn, Erin C. R., Luke D. Smillie, Luiza B. Pacheco, and Simon M. Laham. 2022. From Ordinary to Extraordinary: A Roadmap for Studying the Psychology of Moral Exceptionality, Current Opinion in Psychology 43: 329-334.
Moynihan, Thomas. 2020. Existential Risk and Human Extinction: An Intellectual History, Futures 116.
Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity (Hachette Books).
Oxford Future of Humanity Institute. About FHI.
Oxford Future of Humanity Institute. Publications.
Oxford Future of Humanity Institute. Technical Reports.
Roush, Ty. 2023. Ex-Google CEO Warns Current AI Guardrails ‘Aren’t Enough’—Likened Development to Nuclear Weapons, Forbes (November 28), https://www.forbes.com/sites/tylerroush/2023/11/28/ex-google-ceo-warns-current-ai-guardrails-arent-enough-likened-development-to-nuclear-weapons/?sh=e075dbcf84be (accessed 6 December 2023).
Savulescu, Julian and Nick Bostrom, eds. 2009. Human Enhancement (Oxford University Press).
Schubert, Stefan, Lucius Caviola, and Nadira S. Faber. 2019. The Psychology of Existential Risk: Moral Judgments about Human Extinction, Scientific Reports 9, no. 1: 15100.
Schuster, Joshua, and Derek Woods. 2021. Calamity Theory: Three Critiques of Existential Risk (University of Minnesota Press).
Stanford Existential Risks Initiative. Resources.
Stanford Existential Risks Initiative. Intersections, Reinforcements, Cascades: Proceedings of the 2023 Stanford Existential Risks Conference.
Syropoulos, Stylianos, Kyle Fiore Law, Gordon Kraft-Todd, and Liane Young. 2023a. The Longtermism Beliefs Scale: Measuring Lay Beliefs for Protecting Humanity’s Longterm Future.
Syropoulos, Stylianos, Kyle Fiore Law, Matthew Coleman, and Liane Young. 2023b. A Future Beyond Ourselves: Can Self-oriented Prospection Bridge Responsibility for Future Generations?
Syropoulos, Stylianos, Kyle Fiore Law, and Liane Young. 2023c. Caring for Future Generations: Longtermism and the Moral Standing of Future People.
Vox, Lisa. 2017. Existential Threats: American Apocalyptic Beliefs in the Technological Era (University of Pennsylvania Press).
Withington, John. 2012. Disaster! A History of Earthquakes, Floods, Plagues, and Other Catastrophes (Skyhorse).
William Aspray (January 2024). “Is AI an Existential Threat? Let’s First Understand What an Existential Threat Is.” Interfaces: Essays and Reviews on Computing and Culture Vol. 5, Charles Babbage Institute, University of Minnesota, 1-10.
About the author: William Aspray is Senior Research Fellow at CBI. He formerly taught in the information schools at Indiana, Texas, and Colorado; and served as a senior administrator at CBI, the IEEE History Center, and Computing Research Association. He is the co-editor with Melissa Ocepek of Deciding Where to Live (Rowman & Littlefield, 2021). Other recent publications include Computing and the National Science Foundation (ACM Books, 2019, with Peter Freeman and W. Richards Adrion); and Fake News Nation and From Urban Legends to Political Fact-Checking (both with James Cortada in 2019, published by Rowman & Littlefield and Springer, respectively).