Interfaces Volume 1 (2020)

Essays and Reviews in Computing and Culture 


Interfaces publishes short essay articles and essay reviews connecting the history of computing/IT studies with contemporary social, cultural, political, economic, or environmental issues. It seeks to be an interface between disciplines, and between academics and broader audiences. 

Editors: Jeffrey R. Yost and Amanda Wick

Cultural Networks: Infrastructural Implications of AT&T’s Picturephone

Malinda Dietrich, University of Colorado, Boulder

Abstract: In 2020, video telecommunications seem ubiquitous. Between work and play, many people use a range of software to connect them with other people all around the world. This short essay begins to explore how we arrived at this seemingly universal technology by exhuming a failed technology: AT&T’s Picturephone. Through this historical exploration, we will come to see that infrastructure and culture are closely related, and that future work must be done to explore the social inequities that become apparent.


 

Introduction

Regardless of how commonplace video calls are now, they are the result of continuous iterations of a failed technology from almost half a century ago: Bell Laboratories’ “Picturephone.” First showcased during the 1964 World’s Fair, the first Picturephone demonstrated how people on a call could see one another, at 30 frames per second and in black and white, on small screens (“Mechanical ‘Brains’, Lasers”). The cylindrical device housed a screen on one end and connected to a handset that allowed the user to control the screen (Gertner). Using this technology, two men—one in New York and the other at Disneyland in Anaheim, California—completed the first transcontinental video call, which lasted about ten minutes (Gerber; “Television Phone Used”). After the demonstration, visitors waited in line for a turn to speak on a Picturephone in one of six booths (Schnaars and Wymbs; see Figure 1). After the World’s Fair, market researchers conducted 700 interviews with individuals who had attended the fair and used the Picturephone (Gertner; Schnaars and Wymbs). Most users rated the service well, and criticism focused on design features such as the inability to turn off the video or adjust the height of the device (Schnaars and Wymbs). Bell Laboratories hoped that this would ignite widespread interest in the device, anticipating that a Picturephone would live inside the homes of most people by the 1980s (see authors in the Bell Laboratories archive).

 

Figure 1: Picturephone booths at the 1964 World’s Fair.

Scholars who have highlighted the failure of the Picturephone point to its many infrastructural issues (see Gertner; Lipartito; Schnaars and Wymbs). Laying the supplemental wires needed to transmit a video signal was expensive, which increased the price of the service, and even then the picture quality was poor. Missing from these accounts of the Picturephone is an understanding of infrastructure’s relationship to culture. Kenneth Lipartito begins to make this tie by focusing on the material and discursive formations of the Picturephone. His work focuses on the rhetoric of failure, using the Picturephone as an example (Lipartito). The technology worked from an engineering perspective; however, Bell had not considered that the service did not fill a consumer need, or that it needed a critical mass to take off (Schnaars and Wymbs). In other words, just because the infrastructure worked did not guarantee the technology’s popularity; therefore, this article uses the Picturephone as a historical artifact that allows us to speak to the cultural relations of infrastructure.

“Picturing” the Future of Telecommunications in the Late 1800s

The idea of a Picturephone can be traced to around the same time the telephone was taking off. On December 9, 1879, George du Maurier published an illustration of “Edison’s Telephonoscope” in Punch magazine, in which two individuals are depicted using a technology resembling a phone, megaphone, and television combined to communicate (see Figure 2). The reading of this image is contested—some argue it is a satire of Thomas Edison and his imagined future inventions (Roberts), while others have suggested it resembles what a video phone or a television would look like almost 100 years later (Burns). While this image could be read as one of the earliest conceptions of a video telephone, it was not until the 1920s that the first formal attempts at video conferencing began.

Figure 2: “Edison’s Telephonoscope (transmits light as well as sound)”. An illustration by George du Maurier. (Public Domain)

Video telecommunication, at the most simplistic level, requires sending images along with a telephone signal. By the 1920s, the cables and wires used for the telegraph as well as the telephone had been installed. Alexander Graham Bell had established the companies that would eventually become the American Telephone and Telegraph Company (AT&T) in 1884 (“The Historical Brands”). The research branch of AT&T, Bell Laboratories, would work with one of the engineers and inventors of the first mechanical television (released in 1926) to complete the first video conference call in 1927 (Hanhardt; see also McGoogan). This call, a two-way audio connection with a one-way video connection, connected Secretary of Commerce Herbert Hoover and other officials in Washington, D.C., with AT&T president Walter Gifford in New York City (Turi). A few years later, in 1931, Bell Laboratories held the first public demonstration of a two-way videophone (Guernsey). The system for this demonstration used early television equipment and a closed circuit (Turi). Due to the Great Depression, further iterations of video telephones stalled; the technology lacked efficiency and reliability, and it needed further development that required more funding (Guernsey).

On December 23, 1947, the transistor was successfully demonstrated at Bell Laboratories (Shampo et al.). The transistor became an integral part of many (if not all) modern electronic devices—it served as an amplifier, removing the need to rely upon vacuum tubes, as well as a switch for digital devices. Following this invention, and after a few years spent figuring out how to implement it, the 1950s brought more instances of images sent over telephone lines. In 1955, the Mayor of Palo Alto called the Mayor of San Francisco using a videophone developed by Kay Lab of San Diego (“Gawkie-Talkie”). According to a Chicago Tribune article, this video phone was anticipated to take off in the 1960s, particularly in factories and hospitals. About a year later, hospitals and the U.S. defense department demonstrated interest in sending X-ray pictures over the phone, and Bell Labs developed its first phone that transmitted pictures along with sound (“Sending X-Ray Pictures”; “Phone that transmits”). The phone, not yet introduced as the Picturephone, included a 2-inch by 3-inch screen and utilized a pair of ordinary telephone wires to display one picture every two seconds (“Phone that transmits”; “Scientists See Picture-Phone”; Gould).

AT&T tried to use the wiring and other infrastructures available to them (and on which they held a monopoly) for the Picturephone. To Geoffrey C. Bowker and Susan Leigh Star, “good infrastructure is hard to find” (p. 33)—the easier systems are to use, the more the material systems recede into the backgrounds of our minds. On one hand, the Picturephone demonstrated this theorization of infrastructure: tens of thousands of transistors went into the creation of the picture telephone (Gertner, p. 190), while now billions of transistors (at the scale of ten nanometers and continuing to shrink) go into a single central processing unit (Gertner, p. 209). On the other hand, using a Picturephone was clunky and not intuitive enough for use in people’s homes.

Picturephone Model I: Not Your Average Payphone or Telephone Booth

A few months after the Picturephone’s appearance at the World’s Fair, Lady Bird Johnson assisted AT&T in kicking off its commercial Picturephone service by making a call from Washington, DC, to New York City. Mrs. Johnson spoke for five minutes to Dr. Elizabeth A. Wood, a scientist from Bell System’s laboratories, on a screen 4⅜ inches by 5¾ inches (“Picture Phones Go”; see Figure 3). Unlike at the World’s Fair, individuals did not have to sit completely still to be seen by the other party.

Figure 3: Lady Bird Johnson uses the Picturephone to make a call from Washington, DC, to New York City.

The Picturephone commercial service began in three cities—New York, NY; Washington, DC; and Chicago, IL—each with a designated Picturephone booth (“Picture Telephone Ready”). New York’s booth was located in Grand Central Terminal, Washington’s at the National Geographic Society Building, and Chicago’s in the Prudential Insurance Building (“Picture Telephone Ready”). In order to use the Picturephone, users had to make an appointment for a particular time. The calls had differing costs: $16 for the first three minutes between New York and Washington, $21 between Chicago and Washington, and $27 between Chicago and New York (“Picture Phones Go”; roughly $130 to $224 in 2020 dollars). Bell Labs attempted to market these booths for many different purposes, such as home buying, business communication, sales, and even demonstrating hair styling (Sloane). Some of these purposes worked for some individuals—one New Jersey couple did find and buy a home in Chicago from the Grand Central booth (“N.J. Couple Selects”), while some businesses sought to limit travel (“Video Phone Held”). However, most people did not like the booth services, finding appointments burdensome and the cost excessive (Gertner). As a result, Bell opted to focus its attention on companies and corporations.

Picture This: Company Usage of Video Phones

In 1967, the Picturephone began the slow process of integration within corporations. Per John Wilford, the invention of a new compact and more durable television camera tube would help the systems go into commercial trial the next year, in 1968 (p. F1). Wilford suggested uses similar to what was proposed for the booth services, such as buying products or limiting business travel. He also wrote about the phone’s infrastructure: the Picturephone utilized the same wires as other telephones but required two extra pairs for transmitting and receiving the video signals (Wilford). The Picturephone also relied upon “digital transmission systems” which, in Wilford’s words:

“take a telephone signal, either voice or video, and turn its waveforms into electrical voltages represented by computer language [binary]. It then breaks the signal into a stream of coded electrical pulses...each [telephone] line is capable of handling several million pulses a second.” (p. F1)
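
Read today, Wilford’s description is essentially pulse-code modulation: sample the analog waveform, quantize each sample, and send the result as a stream of binary pulses. The short sketch below (in Python) illustrates only that general idea; the 8 kHz sample rate, 8-bit depth, and function names are illustrative assumptions for this essay, not specifications drawn from Bell’s Picturephone documentation.

    import math

    def digitize(signal, sample_rate_hz, bits=8, full_scale=1.0):
        """Sample an analog waveform (a function of time, in volts) and quantize
        each sample into a binary code word -- the basic idea Wilford describes."""
        step = 2 * full_scale / (2 ** bits)            # size of one quantization level
        codes = []
        for n in range(sample_rate_hz):                # one second of samples
            t = n / sample_rate_hz
            v = max(-full_scale, min(full_scale - step, signal(t)))
            level = int((v + full_scale) / step)       # integer level, 0 .. 2**bits - 1
            codes.append(format(level, f"0{bits}b"))   # the "stream of coded pulses"
        return codes

    # A 1 kHz tone (a stand-in for voice) sampled 8,000 times a second at 8 bits
    # yields 64,000 pulses per second; adding a video signal multiplies that into
    # the millions of pulses per second Wilford mentions.
    tone = lambda t: 0.5 * math.sin(2 * math.pi * 1000 * t)
    codes = digitize(tone, sample_rate_hz=8000)
    print(len(codes) * 8, "pulses (bits) per second")  # prints 64000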

Over a year and a half later, Frank Wells published another article detailing, again, how Picturephone service was “feasible” and expected to be offered commercially in early 1970 (p. 8). Wells discussed how a new L-4 coaxial cable system would be capable of carrying 32,400 telephone conversations at the same time, and he concluded that this would benefit business communication by limiting travel (p. 8).

In 1969, Westinghouse, an electric company based in Pittsburgh, PA, became one of the first companies to exhibit “how well” the AT&T Picturephone “worked” (“Westinghouse Tests New”, p. 51). Westinghouse signed a six-month contract to test 40 Picturephone sets, 29 in Pittsburgh and 11 in New York (“Westinghouse Tests New”). AT&T installed the Picturephone Mod II, which included a wider screen, 5½ inches wide by 5 inches high (“Westinghouse Tests New”), controls that allowed the user to adjust the camera height and boost the contrast of the image on the screen (“Westinghouse Tests New”), and features such as faxing and group videoconferencing (Schnaars and Wymbs). It is interesting that AT&T was trying to build faxing into these Picturephones, as the facsimile, or fax machine, was a technology that failed in certain historical moments and flourished in others (Coopersmith).

In trying to garner more business interest in the device, the May/June 1969 edition of the Bell Laboratories Record detailed how the Picturephone worked and how it could be easily integrated into professional life. In particular, secretaries (or “attendants,” per the article) would be able to utilize the voice-only functions of the service in communicating with their boss, and they would serve as a type of internal operator (Harris and Williams). These jobs fell primarily to women (see Bureau of Labor Statistics: https://www.bls.gov/mlr/2006/03/art3full.pdf), yet the Picturephone was created by and marketed primarily to men.

The Pittsburgh commercial service was officially inaugurated on July 1, 1970 (“Picture-Phone Service”; Janson; “Dial a Friend’s”). Thirty-eight sets of Picturephones were installed for eight companies in Pittsburgh (“Dial a Friend’s”). As reported by an article in the Chicago Tribune, the emphasis on commercial service was due to high initial costs and charges (“Dial a Friend’s”). Installation cost $150 for the first year, and each company with Picturephones installed paid $160 a month for service on the first set and $50 a month for each additional one—costs that did not include the 25¢-a-minute charge once a company used the Picturephone beyond 30 minutes a month. In today’s dollars, the companies paid over $1,300 a month to use a single Picturephone. Lipartito notes that, in an early survey, prospective users were willing to pay $125 a month for the service; however, growth in both consumers and infrastructure was integral to the technology’s success. Lawrence J. Barnhorst, then vice president and general manager of the Bell Telephone Company of Pennsylvania, was quoted as saying the primary objective of the company was to “reduce the cost” of the Picturephone so it would “be readily available to everyone”; the company was hopeful that mass production in the 1980s would reduce the rates and make the technology more affordable (“Dial a Friend’s”). In other words, AT&T set the Picturephone price based on previous customer research as well as infrastructural cost. With this cost set, Donald Janson of The New York Times anticipated that by 1975 over 100,000 Picturephones would be in use. Janson likened the high initial costs to those of the first transatlantic and transcontinental telephone calls: what originally cost $75 in 1927 had fallen to $6 by 1964 as popularity and use grew. The Picturephone, however, was still struggling to find its market for its popularity to rise in the first place.
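
To make the rate structure above concrete, the following sketch (in Python) adds up a hypothetical company’s monthly bill and converts it to 2020 dollars. The inflation multiplier and the overage minutes used in the example are assumptions for illustration, not figures from the Tribune article.

    def monthly_bill_1970(sets, overage_minutes=0, amortize_installation=True):
        """Approximate a company's monthly Picturephone bill in 1970 dollars,
        using the rates reported by the Chicago Tribune."""
        service = 160 + 50 * max(0, sets - 1)      # $160 first set, $50 each additional
        overage = 0.25 * overage_minutes           # 25 cents/minute beyond 30 minutes/month
        installation = 150 / 12 if amortize_installation else 0  # $150 first-year charge, spread out
        return service + overage + installation

    # Assumed inflation multiplier (roughly 6.7x from 1970 to 2020, per BLS CPI tables);
    # an estimate for illustration, not a number from the article.
    CPI_1970_TO_2020 = 6.7

    bill = monthly_bill_1970(sets=1, overage_minutes=120)
    print(f"${bill:.2f} in 1970  ->  about ${bill * CPI_1970_TO_2020:,.0f} in 2020")
    # A single, moderately used set comes out above $1,300 in 2020 dollars,
    # consistent with the figure cited above.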

With the now 5-inch by 5½-inch screen that displayed a “clear and sharp” image in black and white, the Model II (Mod II) Picturephone had addressed privacy concerns and opened new avenues of use (p. 1). For instance, once the Picturephone was integrated into the home, at-home users could shop by Picturephone, visit a hospital, hold a family reunion, or attend a lecture (“Dial a Friend’s”). Multiple articles even mentioned increased accessibility for those hard of hearing, who could read lips using the Picturephone (“Dial a Friend’s”). Commercially, these phones could be used to communicate with a computer; these units, called “Data Sets,” would make it possible for a Picturephone to display information from the computer (“Dial a Friend’s”). Janson stated that Bell Labs was working on color and three-dimensional images, as well as expanding the service to other cities beyond Pittsburgh, with Chicago being next (Janson; “Dial a Friend’s”).

The Picturephone and the other networked structures behind it have served as infrastructure for later projects and for aspects of technologies we use today. The transistor is one example, but we can look to the Picturephone for others, such as the sending of images, like X-rays, over telephone lines (“Gawkie-Talkie”). When the Picturephone was first being tested, hospitals were thought to be one of the main institutions to utilize the technology, and even the defense department showed interest in being able to transmit images over these wires (“Sending X-Ray Pictures”). While there is a whole (separate) history of wire photos, or images sent over telegraph and telephone lines, the Picturephone, and the funds AT&T spent on adding more wiring for its installation, laid the groundwork for future transmissions. On a related note, the Picturephone also gave AT&T an opportunity to further study and improve early digital signaling in order to transmit signals over greater distances. Finally, one could argue that the Picturephone, particularly through the “Data Sets,” was one of the first instances of a graphical user interface (GUI). To be clear, AT&T avoided any antitrust issues (such as explicitly moving into computational territory, which it was not permitted to do under a 1956 antitrust case) by using telephone wires (Lipartito). While these “data sets” have not been recognized in histories of the GUI, likely because of this case, I believe that by allowing individuals to more easily “communicate” with their computers, the data sets proved to be a feature (and infrastructure) paramount to most of our computer usage today.

The Fizzle & Failure of the Picturephone

In January 1971, Jesse Glasgow published an article detailing how businesses in Maryland were hoping to receive Picturephone services for conference calls; however, Glasgow described the Picturephone, which at this point had been commercialized for six months, as still “in its infancy” (Glasgow, p. K7). Around the one-year anniversary of the Picturephone’s commercialization in Pittsburgh, Boyce Rensberger published an article describing the disappointment with the product that the Bell Corporation was experiencing. Only 16 Picturephones had been installed since 1970. Rensberger quotes Robert Sweeney, the marketing manager for Bell in Pittsburgh, as saying that “the thing hasn’t really grown the way we thought it would...it’s turned out to be an extremely successful device from an engineering design point of view, but the trouble is we just haven’t found our market” (p. 26). Although Rensberger cites the economic slump as part of the issue, he also claims the lack of long-distance service did not help (p. 26). Rensberger also quotes Joseph C. Rengel, executive vice president for nuclear energy systems at Westinghouse, explaining that the Picturephone is “not a very personal form of communication. There’s no color. You’re gray. I’m gray” (p. 26).

Per Kenneth Lipartito, by the end of 1972, Pittsburgh had only 32 sets of Picturephones in service. Chicago, the only other city to receive commercial service, peaked in 1973 at 453 users (Lipartito, p. 52). Picturephone existed through 1974 with a few customers paying $87.50 for the service, but by 1978 the remaining devices existed only on desks at Bell Labs before quietly being removed (Lipartito, p. 52). The Picturephone had socially slipped and fiscally failed—gathering the interest of only a fraction of Bell’s users and costing the company around $500 million.

From an engineering standpoint, the technology of the Picturephone worked—it completed the job it was technically supposed to complete. However, just because something works does not mean that people will use it. People support infrastructures just as infrastructures support us and our activities. The telegraph, and even early telephone usage, had people serving as infrastructure through their positions as operators. In the case of the Picturephone, without other individuals to call (and to use video with), it is meaningless that multiple pairs of telephone wires can carry audio and visual information. Our connections to others are what give meaning to these telecommunication technologies.

In fact, the Picturephone grappled with issues similar to those video conferencing continues to deal with today: privacy and connection overload (which, like everything else I’m talking through, are interconnected). While AT&T had monopolized the telegraph and telephone wires, meaning it had over 25 million people it could potentially connect through video (Gertner, p. 192), folks did not necessarily want to be connected and on video in their own homes. Even today, many folks feel a kind of context collapse—people on the other end of the video call are seeing a space that many consider to be their own, private space. Neither of these points even considers how our feelings of privacy are further contested in a digital information age (see articles on Zoom’s lack of end-to-end encryption and data management). In terms of connection, people also face greater fatigue in having not only to be present in a manner similar to face-to-face conversation, but also to expend more energy trying to read social cues over video. Some aspects of nonverbal communication might be easier to understand over video, but this form of mediated communication does not replace proximity.

Figure 4: A patent for one of the later iterations of the Picturephone.

Phoning It In: Learning from Past Mistakes

The Picturephone is considered a technological failure by AT&T (which lost around $500 million on the device; see Gertner) and others (Lipartito; Guernsey). As everyone grapples with the COVID-19 global pandemic in 2020, video telecommunication is more integral to our daily lives than ever. By drawing on AT&T’s Picturephone, we can better understand how the technical relies upon the cultural for productive adoption and infiltration into everyday life. In a broader sense, relying upon examples from the past can help us better understand what is happening in the current moment. It can also help us understand how we got to the next new thing by examining how previous, “failed” technologies serve as infrastructure for what we now take for granted.

One issue still occurring in our moment of video telecommunication technologies is inequity of access. Although this essay attempts to unpack our “connections” to infrastructure and culture in order to tease out the humanity in this historical narrative, I want to acknowledge that this “humanness” falls short. The dominant history of the Picturephone, like that of many earlier technologies, records a technology created by, marketed to, and used primarily by white men. An intersectional approach is not a strength of this narrative. While social movements, like the civil rights movement, occurred concurrently, there is also a dearth of accessible records from the period. More research needs to be done to determine whether the 1964 World’s Fair or the early Picturephone booths were segregated spaces, and further work is necessary to contextualize the period in conjunction with the creation of this technology. All of these points feed into larger issues of systemic racism, which continues to be perpetuated through processes such as biased ways of inscribing history, issues with search algorithms, and more.

Ultimately, we expect to learn from our shortcomings and mistakes of the past in order to make video telecommunications as ubiquitous as it seems to most people (particularly in the Western world). However, there is still plenty of work to be done, and it begins with our connections bringing meaning to the infrastructure and culture around video telecommunication technology.


 

Bibliography

“1870s-1940s Telephone.” Imagining the Internet: A History and Forecast, https://www.elon.edu/e-web/predictions/150/1870.xhtml. Accessed 4 Aug. 2020.

Bowker, Geoffrey C., and Susan Leigh Star. (1999). Sorting Things Out: Classification and Its Consequences. MIT Press.

Burns, Russell W. (1998). Television: an international history of the formative years. No. 22.

Coopersmith, Jonathan. (2015). Faxed: The Rise and Fall of the Fax Machine. JHU Press.

Davis, C. G. (1969). “Getting the Picture.” Bell Laboratories Record, pp. 143–48.

“Dial a Friend’s Face! Picture Phone Service Is Begun.” (1970). Chicago Tribune, p. 1.

Dorros, Irwin. (1969) “Picturephone.” Bell Laboratories Record, pp. 137–41.

“Gawkie-Talkie Telephone Is Here at Last! It Not Only Hears You, It Sees You, Too!” (1955). Chicago Tribune, p. B7.

Gertner, Jon. (2012). The Idea Factory: Bell Labs and the Great Age of American Innovation. The Penguin Press.

Glasgow, Jesse. (1971). “C.&P. Man Sees Future In Picture Phone.” The Sun, p. K7.

Gould, John. (1956). “Picture, Please! Now You’ll Be Able to See While You Talk on the Newest Version of Mr. Bell’s Invention.” The New York Times, p. SM11.

Graham, Stephen, and Nigel Thrift. (2007). "Out of order: Understanding repair and maintenance." Theory, Culture & Society, 24.3. p. 1-25.

Guernsey, Lisa. (2000). “Cautionary Tale: The Perpetual Next Big Thing.” New York Times, p. G8.

Hanhardt, John G (1981). "The First Mechanical Television." Journal of the University Film Association 33.2. p. 33-34.

Harris, J.R. and R.D. Williams. (1969) “Video Service for Business.” Bell Laboratories Record, pp. 149–53.

Janson, Donald. (1970). “Picture-Telephone Service Is Started in Pittsburgh.” The New York Times, p. 1.

Korn, F. A., and A. E. Ritchie. (1969). “Choosing the Route.” Bell Laboratories Record, pp. 157–59.

Lipartito, Kenneth. (2003). “Picturephone and the Information Age: The Social Meaning of Failure.” Technology and Culture, vol. 44, no. 1, pp. 50–81.

Lee, John M. (1964). “Mechanical ‘Brains’, Lasers and 2-Way Picture Phones Are Shown by Industry.” The New York Times, p. 25.

MacDougall, Robert (2014). The People's Network: The Political Economy of the Telephone in the Gilded Age, University of Pennsylvania Press.

McGoogan, Cara (2016). “Who Invented the Television? How People Reacted to John Logie Baird’s Creation 90 Years Ago.” The Telegraph, https://www.telegraph.co.uk/technology/google/google-doodle/12121474/Who-invented-the-television-John-Logie-Baird-created-the-TV-in-1926.html.

Molnar, Julius P. (1969). “Picturephone Service- A New Way of Communicating.” Bell Laboratories Record, pp. 134–35.

“N.J. Couple Selects Home by ‘Picturephone.’” (1965). The New York Times, p. R6.

Parks, Lisa, and Nicole Starosielski, eds. (2015). Signal traffic: Critical studies of media infrastructures. University of Illinois Press.

“Phone That Transmits Pictures Along With Sound Is Developed.” (1956). The New York Times, p. 29.

“Picture Phones Go Into Service: Mrs. Johnson Is One of First to Use New Device.” (1964). The New York Times, p. 24.

“Picture-Phone Service.” (1970). South China Morning Post, p. 1.

“Picture Telephone Ready Next Month; Will Link 3 Cities.” (1964). The New York Times, p. 39.

Roberts, Ivy. (2017). “‘Edison’s Telephonoscope’: The Visual Telephone and the Satire of Electric Light Mania.” Early Popular Visual Culture, vol. 15, no. 1, pp. 1–25.

Schnaars, Steve, and Cliff Wymbs. (2004). “On the Persistence of Lackluster Demand—the History of the Video Telephone.” Technological Forecasting and Social Change, vol. 71, pp. 197–216, doi:10.1016/S0040-1625(02)00410-9.

“Scientists See Picture-Phone System On Way.” (1956). New Journal and Guide, p. D5.

“Sending X-Ray Pictures by Phone Tested By Radiologists Here and in Philadelphia.” (1956). The New York Times, p. 60.

Shampo, Marc A., Robert A. Kyle, and David P. Steensma. (2012). "William Shockley and the Transistor." Mayo Clinic Proceedings. Vol. 87. No. 6. Elsevier.

Sloane, Leonard. (1956). “Picturephone Helps to Sell Over Hundreds of Miles.” The New York Times, p.F1.

“Television Phone Used From Fair to California.” (1964). The New York Times, p. 31.

“The Evolution of Picturephone Service.” (1969). Bell Laboratories Record, pp. 160–61.

“The Historical Brands of AT&T.” Att.Com, https://about.att.com/innovation/ip/brands/history. Accessed 4 Aug. 2020.

Turi, Jon. (2014). “Look Who’s Talking: The Birth of the Video Phone.” Engadget, https://www.engadget.com/2014-09-07-look-whos-talking-the-birth-of-the-video-phone.html.

“Video Phone Held an Aid to Transit: Wide Use of New Instrument Could Reduce Traveling.” (1966). The New York Times, p. 86.

“Westinghouse Tests New Phone Units.” (1969). The New York Times, p. 51.

 

Malinda Dietrich (September 2020). “Cultural Networks: Infrastructural Implications of AT&T’s Picturephone.” Interfaces: Essays and Reviews in Computing and Culture Vol. 1, Charles Babbage Institute, University of Minnesota, 35-49.


About the author: Malinda Dietrich is a PhD student in the Communication department at the University of Colorado Boulder. She is interested, most broadly, in communication technologies. When not attempting to expand this project around AT&T’s Picturephone, she is also working on a project on Bill Gates’s Open Letter to Hobbyists (a computational historiography of software development), as well as a project that attempts to define the meaning of “data” from different people’s perspectives.


 

From Telecommuting to Mobile Work: The IBM Experience, 1890s-2020

James W. Cortada, Senior Research Fellow, Charles Babbage Institute, University of Minnesota

Abstract: IBM was an early practitioner of remote working, beginning in the 1890s, but expanding this way of working in the 1980s. Customer engineers, programmers, systems engineers, salesmen, and consultants participated. Mobile work posed its own operational problems but offered benefits for improved service, productivity, and employee morale. However, its motives and practices remained controversial.


 

IBM "Brick" used by field engineers to communicate, receive assignments, and to order parts.

Introduction

With the pandemic in full swing this spring, many people began working from home, or somewhere other than their office. The opulent campuses in Silicon Valley emptied and, to the shock of many IT people, it was a radical change. As of March 2020, it appeared that five million US employees worked from home, representing 3.6 percent of all labor. Other data suggests that up to 43 percent of office workers sometimes work from home, which sounds more realistic. IBM recently bragged that hundreds of thousands of its workers worked remotely, but then announced within a couple of months that many would be brought back into offices. IBM had made a similar announcement in 2017, after having bragged about how extensively its staffs worked remotely. But IBMers have been doing this since the 1890s, before the company became C-T-R in 1911 and was renamed IBM in 1924, and long before the Internet, iPhones, or clunky laptops. So, it appears we need some history.

Providing Onsite Service

IBM has always been an ecosystem of different types of workers: headquarters office employees, programmers and engineers in research laboratories, factory workers, customer engineers (CEs—they installed and repaired equipment and software), sales personnel, and consultants. CEs (under different titles over the century) have worked since the 1890s wherever customers had tabulating equipment, and later computers. About once a week they came to an IBM office to do their paperwork and meet with their managers. That continued right through the twentieth century. In research for this article I queried retired CEs. One who had started in the 1950s explained that his “tech support,” i.e., how he interacted with “Dispatch” and his manager, was “a phone booth.” Until the 1980s, they filled out “IRs” (incident reports) on cards, with information on the nature of their work, repairs, customer name, and so forth. For decades, CEs kept Dispatch informed of where they were working, received instructions on where to go next, and checked in about the availability of parts, keeping these dispatch centers informed.

Then in 1983, the Field Engineering Division introduced the Data Communication System (DCS, Motorola KDT800 terminal), better known to several tens of thousands of employees as the “Brick,” because it looked like one. Built by Motorola and operating on a radio network run by that company, it worked well and was used much like a tablet, further minimizing trips to the office, except to pick up parts and attend monthly meetings. In 1997, Motorola introduced a new generation of the machine. IRs and all manner of Dispatch and management communications were done remotely.

California Surfing on a Terminal

In 1979 a different chapter in IBM’s mobility history began when its laboratory at Santa Teresa in Silicon Valley gave five employees terminals to work at home. Productivity and morale were high, the technology functioned, and other lab employees there and in other locations began doing the same. By 1983, an estimated 2,000 employees in the U.S. split their work between office and home. Their ability to program remotely expanded through the 1980s, facilitated by faster dial-up lines over the years and the ability to attach modems to PCs during the second half of the decade. Simultaneously, IBM expanded its e-mail network (PROFS) worldwide, with incremental ability for employees to access e-mail from home. Anecdotal evidence suggests programmers liked working remotely. Some worked at an IBM facility but also had access to files and mainframes from home.

On critical projects, such as product development and support, technical staff found it useful to have access to mainframe-hosted files after hours to troubleshoot issues at night or on weekends using PCs, initially with dial-up connections. That sped up analyses of problems, instead of waiting to do so after driving back to a plant location, such as Boca Raton, Florida (home of PC development in the early 1980s) or the Rochester, Minnesota facility, where additional collaboration on PC and other related issues required coordination between the two sites. They built on the experiences of programmers of the 1970s at the Palo Alto, California site who had used TTY terminals to develop APL.

But What About the Sales Force? Weren’t They Mobile?

In the 1910s, IBM had sales facilities called branch offices that looked more like retail operations, where customers came to learn about products. By the end of the 1920s and extending to the 1990s, branch offices existed where sales staff had either a desk in “bull pens” or offices, but they spent only about a third of their time in these facilities. The rest was spent traveling from customer to customer, working out of their buildings, communicating with other IBMers via telephone and, later, with their IBM-issued laptops (1990s), because in their trade face-to-face communication with customers proved crucial. They and their customers also visited IBM’s education centers and labs to learn about new products and applications. Every manufacturing site entertained its customers this way. So, salesmen had considered themselves remote workers for nearly a century.

As branch offices were dismantled in the 1990s, replaced with hotel-style workspaces that became popular with many companies, sales staff came into IBM facilities less frequently. They came in to pick up their mail, use a conference room to host meetings with customers, and kibitz with colleagues. By then, they were interacting with their managers via telephone, rarely in person. By the end of the 1990s it was not uncommon for a salesman or consultant to say that they had not seen their manager face-to-face in six months, although communications via telephone and e-mail were frequent, at least once a week. That reality still exists today. The lower in an organization one was, the more likely one worked remotely, without a desk or office to claim in an IBM building. In all eras, by the time one became a second-line manager one had an office, although even that practice declined in the early 2000s.

Big Blue Goes All Out With Mobility

In the late 1980s IBM entered a period when revenues and profits began declining sharply, forcing management to cut expenses. The Real Estate Division, which was responsible for building factories, offices, and laboratories and for renting space, was put under enormous pressure to sell some of it off, which it did. But what does one do with all those employees who used to go to large office towers in Paris, New York, Chicago, and elsewhere? The problem involved field personnel—salesmen, systems engineers, their management, and their administrative staffs. In the United States, beginning slowly and largely in the Midwest, customer-facing employees and their administrative staffs were issued PCs, and IBM began paying for their slow dial-up lines. Some of these lines were also set up in customer facilities. One employee working with Kodak in 1994 recalled, “When I started out working from home IBM and TSS (a Kodak-IBM joint venture) offered us office furniture, printers, computers, paper and paid for by the company along with reimbursements for Internet access and voice telephone home office charges.” Later, the company was less generous. Salesmen, systems engineers, and consultants began working remotely in the early 1990s, and by the end of the decade tens of thousands did so, rarely coming into an IBM office.

It worked again. People’s commutes declined; time spent driving to work often was now devoted to doing IBM’s work. One employee recalled that everyone’s productivity “went through the roof.” Work/life balance improved, while managers who initially thought less work would be done learned that the opposite was true. Prior to going remote, various surveys of how people spent their time at work reported that a third was consumed by internal meetings and paperwork. Meetings converted to conference calls, which all had to be scheduled, so it seemed their number declined, while the remaining paperwork was sped up with email, automation, and online processing. IBM was able to discard over $2 billion in property, some 58 million square feet.

But there was an ugly side to the exercise. One employee involved in some of its earliest iterations explained: “As someone who was involved in IBM’s Mobility program when it was begun around 1993, the entire focus was to equip customer facing folks with the tools, laptops, cell phones, etc. to work from customer, office and home as needed, but it was never intended as a full-time work-at-home program. But when Real Estate found out how much savings $$ they could [gain] by ending leases and selling buildings they forced the issue.” He added, “after IBM booked all those Real Estate savings, they realized they had created something that disconnected IBMers from the Company, community and teams, and they tried to rectify that, but it wasn’t very successful.” Some employees, however, reported no loss of loyalty or connection to IBM. Many found working remotely a positive experience, as one put it, “It was good for me professionally as well as for my family.”

New Era, Consultants and More Mobility

When IBM entered the consulting business in the 1980s, and then in 2002 acquired 30,000 employees from PricewaterhouseCoopers (PwC), the field force was well over 200,000. All were armed with laptops loaded with consulting and sales tools, other software tools, access to the Internet and to IBM’s myriad internal databases, and email. (My laptop from the 1990s above.) By the end of the 1990s, there were tens of thousands of employees who rarely—if ever—entered an IBM office, other than during the first couple of weeks of initial employee onboarding activities. By 2009 IBM reported that “40 percent of IBM’s some 386,000 employees in 173 countries had no office at all.”

IBM learned much during its first three decades of the modern era of working remotely. Employees put in as many hours (or more) as they had in offices and plants, often diverting time once spent commuting to work; hence the company got more hours from its employees without IBMers feeling overworked. They worked in more optimal ways: morning people had been “at work” for three hours by 9:30; other employees were night owls, and so they were working at midnight. Most loved the 20-second commute from their kitchens to their “IBM office.” They could attend their children’s school events and be productive because they had more control over their calendars, their time. It worked. IBM lost some of the spontaneous, serendipitous opportunities to convene impromptu meetings to banter around new ideas, although that could still take place; it just had to be scheduled more than before. One had to be proactive, but also to make room for imagination.

Learning Lessons

Management had much more to learn, however. People needed job descriptions, incentives, and appraisals that valued self-reliance and personal commitment to their jobs. So, hiring self-starters proved crucial, not simply bright people who needed to be told what to do. Those who did not fit the profile disliked this mode of work, often feeling isolated, and so drifted away. Management let it happen. Most employees adapted to the new way of working. Consultants from PwC came from a culture of working remotely at client locations. Everyone needed digital plumbing: high-speed internet access paid for by IBM (not employees), powerful laptops, and state-of-the-art smartphones. The company learned to budget for these, and whenever some manager did not, employees rightfully complained, or quit. It had to be a commitment made at the top of the firm, a lesson the company partially ignored in the 2010s when it went through another round of challenging business times.

Managers had to host online events to maintain community. I used to hold virtual staff meetings at a time agreed to by my teams, which I insisted we would hold “whether we needed to or not.” Every one of those staff meetings was packed with things to talk about. My colleagues in management had the same experience. When online face-to-face meetings became available in the 2000s, these became more relevant and impactful, and employees looked forward to them. We are, after all, social creatures. It proved important to use such events to maintain cohesion, group spirit, and teaming.

In 2012 IBM entered a new period in which revenues declined every quarter, with the exception of one, until 2020. Profits shrank, too, in the post-2017 period. To protect its balance sheet IBM went through relentless rounds of reducing operating costs. These involved attempting to sell off less profitable lines of business, such as PCs and DASD, disposing of buildings (largely factories), and continuously laying off employees. An estimated 100,000 employees were laid off or otherwise pushed out of the company after 2010-2012. Nowhere was this more evident than among the ranks of the CEs, salesmen, systems engineers (now called IT architects), consultants, and administrative staffs. Factory workers were disposed of through sales of factories, bodies and buildings together.

Another form of remote work that began in the 1990s and expanded in the 2000s receives little attention in discussions about mobile work: the wholesale movement of tasks from one country, or division, to another—famously the Indian help desks at Microsoft, IBM, and other companies. Labor costs were lower in India—often 80 percent less than in more “advanced” economies—and telecommunications were available and cheap. So, product development and customer and employee support services moved incrementally out of the U.S. and Europe to India. IBM India went from several thousand employees in the early 2000s to a rumored (never officially announced) 150,000 within a decade, and now closer to 100,000. Centers of Excellence, while still physical locations as they had been since the 1980s, interacted with customers and other IBM divisions more remotely over time. IBM was able to reduce its workforce population in the U.S. while still servicing its American markets, although American IBMers and customers often complained about eroding quality of support. Several tens of thousands of American workers were displaced between the late 1990s and the 2010s through this process. At IBM India, however, employees worked in cubicles on IBM campuses.

New Times New Issues

An important component of IBM’s workforce “rebalancing,” announced in 2017, involved forcing some groups of workers to stop working remotely, initially marketing and communications staffs. They were given the option of reporting to work at offices, often in other cities, which would require them to move their homes and families, normally at their own expense. If they chose not to do so, often within 30 to 60 days, they were considered to have “resigned” from the company. The strategy was implemented in waves between 2017 and the present, with the result that many employees in their ’40s and ’50s left IBM rather than upset their personal lives. One press report echoed what many IBMers were saying on their websites: that this move back to the office was “a veiled method of shedding workers” (Vassel, 2020). IBM said it was to improve productivity and collaboration for improved innovation. The media reported that various studies essentially demonstrated that letting employees work wherever suited them improved performance and productivity. In 2017, IBM selected some 5,000 workers for a return to offices. The optics remained ugly.

Then the pandemic in 2020 led some 350,000 IBM employees to work remotely around the world. Many already had the necessary technical infrastructure and work culture to do so. But by late spring it began to appear that perhaps some could come back to work, so the previous initiative of consolidating work was renewed, while at the same time the firm continued its decade-long process of laying off employees, now many of whom worked remotely. Objections from employees and recently laid-off employees had been intense since at least 2018, so this was a familiar refrain. One programmer commented, “Yep, IBM’s ‘Back to the Labs’ mandate convinced me to retire in 2018. So their ‘real’ program worked in my case.” Another in 2020 opined that “this struck me as a disguised layoff targeting people in their 40s and 50s. Also hit people in their 30s.” Another framed the issue differently: “Let’s be honest. IBM had too much real estate it could not sell so it forced employees back to the office and lost thousands of outstanding professionals.” Others defended remote work: “I worked from home for IBM my last 18 years. My productivity was through the roof. They forced us back into open area office space my last 2 years,” and as a result, “tanked moral and IBM has lost a lot of good people.”

While employee suspicions were probably more true than not, and while there was growing evidence that people working in proximity to each other did improve creativity—important as innovation in AI and other forms of IT was needed—IBM’s senior management had lost too much credibility with the press and with the many employees opining on websites. These decisions to consolidate back to offices were made largely during the tenure of CEO Virginia Rometty.

IBM was not alone in its long journey through remote work. Just as companies took notice in 2017 and again in 2020 when IBM reversed its remote working models, other companies and government agencies tried to learn more. But some also had a history of working remotely. Office and “high tech” staffs explored the possibilities of remote work largely beginning in the 1970s. One voice that solidified much of the early thinking around why and how to do this was Jack Nilles, known widely as the “father of telecommuting,” who spent the bulk of his career as an engineer associated with aircraft and rocket projects and the 1970s as Director for Interdisciplinary Research at the University of Southern California. Since the 1980s he has consulted about telecommuting, authoring books and articles along the way. His publications explained the concept and how to implement it. He helped spread the word among large American corporations about the benefits of remote work, often referencing other initiatives, including IBM’s.

When IBM announced again in June 2020—as the pandemic was spreading faster than in the previous two months—that it was retreating from remote work, press coverage registered surprise, just as other companies were debating whether to extend remote work beyond Covid-19. Employees and recently retired IBMers complained again, while press coverage expressed puzzlement. IBM felt compelled to trot out its chief medical officer, Dr. Lydia Campbell, to explain. CNN quoted her: “I think we realize at IBM and what most large employers realize is that this pandemic is going to make us all think about new ways of working and different approaches to work” (Vassel, 2020). By the end of the summer IT workers all over the world were complaining, often refusing to work at any company campus. Management in such firms had no choice but to let their employees work remotely.

Other companies were moving ahead, experimenting, as it was a new way of working for them. The Society for Human Resource Management announced that two-thirds of American companies were working remotely for the first time. CNN reported that Silicon Valley was notorious for resisting attempts to allow employees to work remotely, and so had little experience, certainly not to the extent of IBM. One compelling reason for the change that struck Silicon Valley workers had dawned on IBMers as early as the 1990s: “It makes no sense paying Bay Area rent if we can earn our salary living elsewhere.” Members of the IT industry were moving in that direction, despite IBM’s opposite march. That they were not following IBM’s lead was an exception to decades of seeing Big Blue as innovative and progressive.

History’s Insights

Historians can draw several lessons from this story. The history of mobile working is fraught with complexity in implementation, impact on work productivity, employee morale, the role of technology, and company culture. Each of these categories of consequences had varied supporters and critics. Noble and sinister motives were both apparent and hidden. How management worked changed, while the careers and political power of employees did too, in myriad ways. It is a history that we know very little about, but one that goes to the heart of how organizations functioned in the last third of the twentieth century and will probably continue to for years to come. It warrants the attention of sociologists, business management scholars, and historians.


Bibliography

Cortada, James W. (2019). IBM: The Rise and Fall and Reinvention of a Global Icon, MIT Press.

Global Workforce Analytics. (2020). Latest Work-At-Home/Telecommuting/Mobile Work/Remote Work Statistics.

Messenger, Jon C. (2019). Telework in the 21st Century: An Evolutionary Perspective, Edward Elgar.

Meyers, Glenn E. (1999) IBM Field Engineering Experiences: A Personal Memoir, IEEE Annals of the History of Computing, 21. No. 4, 72-76.

Mullen, Regina. (June 2, 2017).  IBM Shutters Remote Work: Should You Too?, Replicon blog.

Nilles, Jack M. (1994). Making Telecommuting Happen: A Guide for Telemanagers and Telecommuters, Van Nostrand Reinhold.

__________ (1998). Managing Telework: Strategies for Managing the Virtual Workforce, Wiley.

__________ (2007). The Telecommunications-Transportation Tradeoff: Options for Tomorrow, BookSurge.

Pardes, Arielle. (May 15, 2020). Silicon Valley Rethinks the (Home) Office, Wired.

Sak, John C. (2018). The Computer Guy Is Here!: Mainframe Mechanic, Self Published.

Streitfeld, David. (June 29, 2020). “The Long, Unhappy History of Working From Home,” New York Times.

Useem, Jerry. (November 2017). When Working From Home Doesn’t Work: IBM Pioneered Telecommuting. Now It Wants People Back in the Office, The Atlantic.

Vassel, Kathryn. (June 25, 2020). IBM’s Chief Medical Officer: We Won’t Rush to Bring People Back, CNN.

 

[Note: Quoted material from the author's recent survey of IBMers on Facebook.] 

James W. Cortada, (August 2020) “From Telecommuting to Mobile Work: The IBM Experience, 1890s-2020.” Interfaces: Essays and Reviews in Computing and Culture Vol. 1, Charles Babbage Institute, University of Minnesota, 23-34.


About the author: James W. Cortada is a Senior Research Fellow at the Charles Babbage Institute, University of Minnesota—Twin Cities. He conducts research on the history of information and computing in business. He is the author of IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). He is currently conducting research on the role of information ecosystems and infrastructures.


 

Charles Babbage’s Ninth Bridgewater Treatise

Margaret Dykens, MLIS, MS, Curator and Director of the Research Library, San Diego Natural History Museum

With preface by Amanda Wick, Interim Archivist, Charles Babbage Institute Archives


 

Who was Charles Babbage?

Charles Babbage, Victorian scientist and mathematician, was born on December 26, 1791 to a family of London bankers. Fascinated with mathematics, and especially algebra, he studied the subject at Trinity College, Cambridge. While attending Cambridge, he co-founded the Analytical Society for promoting continental mathematics and reforming traditional teaching methodologies of the time. Many of these methods are still used in some form today in the instruction of algebra.

Following completion of his degree, Babbage worked as a mathematician for the insurance industry. He was elected a Fellow of the Royal Society in 1816 and played a prominent part in the foundation of the Astronomical Society (later the Royal Astronomical Society) in 1820. As a member of the Royal Society during the heady days of the early 1800s, Babbage came into contact with a number of great thinkers and engaged in a robust correspondence with fellow mathematicians, naturalists, and philosophers—including Sir John Herschel, Charles Darwin, and Ada Lovelace.

In 1821 Babbage invented the first of his two calculating machines, the Difference Engine, which would quickly become his singular passion and focus. The Difference Engine was intended to compile mathematical tables and, on completing a working portion of it in 1832, he began work on a more complex and multifunctional machine that could perform any kind of calculation. This was the Analytical Engine (1856), and its invention is widely considered to be the founding of the field of modern computing.
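
Though the essay does not go into the mechanism, the principle behind the Difference Engine—tabulating polynomial values using only repeated addition of finite differences—can be sketched briefly. The short Python sketch below illustrates that general method only; it is not a model of Babbage’s actual hardware.

    def difference_engine(initial_differences, steps):
        """Tabulate a polynomial using only repeated addition of finite
        differences -- the principle behind Babbage's Difference Engine."""
        registers = list(initial_differences)      # [f(0), first diff, second diff, ...]
        table = [registers[0]]
        for _ in range(steps):
            # Add each difference into the register one order below it.
            for i in range(len(registers) - 1):
                registers[i] += registers[i + 1]
            table.append(registers[0])
        return table

    # f(x) = x squared: initial value 0, first difference 1, constant second difference 2,
    # so addition alone reproduces the table of squares.
    print(difference_engine([0, 1, 2], steps=6))   # [0, 1, 4, 9, 16, 25, 36]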

Today, little remains of Babbage's prototype computing machines and, unfortunately, critical tolerances required by his machines exceeded the level of technology available at the time. Though Babbage’s work was formally recognized by respected scientific institutions, the British government suspended funding for his Difference Engine in 1832, and after an agonizing waiting period, ended the project in 1842. Though Babbage's work was continued by his son, Henry Prevost Babbage, after his death in 1871, the Analytical Engine was never successfully completed, and ran only a few "programs" with embarrassingly obvious errors.

Despite his many achievements in mathematics, scientific philosophy, and his leadership in contemporary social movements, Babbage’s failure to construct his calculating machines left him a disappointed and embittered man. He died at his home in London on October 18, 1871.

What’s in a name?

The calculating engines of English mathematician Charles Babbage (1791-1871) are among the most celebrated icons in the prehistory of computing. Babbage’s Difference Engine No. 1 was the first successful automatic calculator and remains one of the finest examples of precision engineering of the time. Babbage is sometimes referred to as "father of computing." The International Charles Babbage Society (later the Charles Babbage Institute) took his name to honor his intellectual contributions and their relation to modern computers.

Where is Babbage in the Archives?

Materials related to Charles Babbage are scattered around the world, with the vast majority of his personal papers and library held at the Science Museum, London, and the British Library. Although the Charles Babbage Institute is named after Charles Babbage, we actually have very little material originating with our namesake. What we do have are first editions of many of his books and journal articles, a number of them inscribed by the author with dedications to his patrons. These rare materials constitute the earliest materials in our repository and, while used in classroom settings and on exhibit, rarely leave our vault. Our holdings of Babbage’s work include the following:

  • Babbage, Charles. On a Method of Expressing by Signs the Action of Machinery. London: [Royal Society of London], 1826.
  • Babbage, Charles. Reflections on the Decline of Science in England, and on Some of Its Causes. London: Printed for B. Fellowes (Ludgate Street); and J. Booth (Duke Street, Portland Place), 1830.
  • Babbage, Charles. On the Economy of Machinery and Manufactures. London: C. Knight, 1832.
  • Babbage, Charles. Passages from the Life of a Philosopher. London: Longman, Green, Longman, Roberts, & Green, 1864.
  • Babbage, Charles. The Ninth Bridgewater Treatise: A Fragment. 2nd ed., reprint. Cass Library of Science Classics, No. 6. London: Cass, 1967.

What is the Ninth Bridgewater Treatise?

One of the titles in Babbage's oeuvre that is uniquely significant is the Ninth Bridgewater Treatise. This volume presents Babbage's perspective on the eight Bridgewater Treatises—a series of works by influential thinkers of the early nineteenth century on natural history, philosophy, and theology. Babbage's contribution was not officially affiliated with the eight-volume series and offered merely his own considerations on the topic. In his volume, which he titled the Ninth Bridgewater Treatise, he discusses his calculating machines and posits the idea of God as a divine programmer who established the rigid natural laws that govern humanity and civilization; in many ways it presents a case for Deus ex Machina.

As a fragmentary piece, and one that does not dwell on mathematical or scientific subjects, it is a rarity amongst Babbage materials. Our copy is a second edition and, while in excellent condition, it is not especially rare. Recently, Margaret Dykens, Curator and Director of the Research Library at the San Diego Natural History Museum, experienced one of those once-in-a-lifetime finds when she reviewed an anomaly within their catalog: an edition of Babbage's Ninth Bridgewater Treatise that appeared to be a galley proof. As she notes in the following article, close examination of the item by both herself and the noted Babbage scholar Dr. Doron Swade yielded several remarkable findings.


Bibliography

Charles Babbage Institute. (10 June 2020). “About Charles Babbage.” Charles Babbage Institute web site. http://www.cbi.umn.edu/about/babbage.html.

Swade, Doron. (12 June 2020). "Babbage, Charles (1791–1871), mathematician and computer pioneer." Oxford Dictionary of National Biography. 23 September 2004. https://www.oxforddnb.com/view/10.1093/ref:odnb/9780198614128.001.0001/odnb-9780198614128-e-962.

 

Amanda Wick (July 2020). “Charles Babbage’s Ninth Bridgewater Treatise.” Interfaces: Essays and Reviews in Computing and Culture Vol. 1, Charles Babbage Institute, University of Minnesota, 17-22.


About the author: Amanda Wick is the interim archivist at the Charles Babbage Institute Archives (CBIA) at the University of Minnesota. Prior to working at CBIA, Amanda led major processing projects at the University of Minnesota and managed the archives of the Theatre Historical Society. She obtained her Bachelor's degree in Environmental Studies from Lawrence University (Appleton, WI) and her Master's in Library and Information Science from Dominican University (River Forest, IL).

 

Charles Babbage’s Ninth Bridgewater Treatise in the SDNHM Library

Margaret Dykens, MLIS, MS, Curator and Director of the Research Library San Diego Natural History Museum

Abstract: As a foundational figure in the history of science, Charles Babbage is best known for his contributions to computing. In fact, his mechanical, programmable calculating machines are considered precursors to modern computers. These accomplishments were the primary reason for the naming of the Charles Babbage Institute, and its archivists have sought to honor its namesake through the purchase of rare books authored and inscribed by him. One such book is a fragmentary oddity, the Ninth Bridgewater Treatise, and a copy owned by the San Diego Natural History Museum that was recently examined by curatorial staff and prominent Babbage scholar, Dr. Doron Swade, holds curious clues to Babbage's approach to natural philosophy. (KW: Babbage, Charles; Swade, Doron; computing history; rare books; antiquities; archives.)

 

Image of Treatise with hand-written pencil annotations

The Research Library of the San Diego Natural History Museum (SDNHM), founded in 1874, has extensive holdings of rare and antiquarian books, including natural history volumes dating back to 1514. The majority of these books were donated by various naturalists and philanthropists over the past one hundred years. One such naturalist was General Anthony Wayne Vogdes (1843-1923), a career Army officer with an active secondary career as a geologist and paleontologist. Vogdes was also an avid bibliophile and bequeathed his extensive scientific library to the SDNHM upon his death in 1923. One of the books from Vogdes' library was a first edition of Charles Babbage's Ninth Bridgewater Treatise (1837).

This particular volume was mentioned in a newspaper article published on January 11, 1896 in the San Francisco Bulletin, which described many of the most important books in Vogdes’ personal library. Babbage’s Ninth Bridgewater Treatise is mentioned in the list with the comment that it contained “annotations by the author.” The book in question appears to be a galley proof with wide margins and many hand-written pencil annotations, as well as marginalia likely written by the author.

hand-written letter bound into the book

There is also a portion of a hand-written letter bound into the book itself—Vogdes was an amateur book-binder and his library consists almost exclusively of his own bindings, many of which have notes, letters, images, or other memorabilia that he collected and bound into the text.

I was intrigued by the hand-written annotations and marginalia in Vogdes' copy of the Ninth Bridgewater Treatise and contacted Dr. Doron Swade, preeminent Babbage scholar and retired curator of the Charles Babbage collection at the Science Museum of London, for verification of the handwriting. After I emailed Dr. Swade several images of the annotations, he replied that it was highly likely they were in Charles Babbage's own hand, because of both the style of the writing and the content itself. To quote Dr. Swade:

Having gone through the 7,000 manuscript sheet (ms) of Babbage Scribbling Books the handwriting in what is visible on the folded manuscripts interleaved on page 128, and in the third image, looks very much like Babbage’s, as do the pencilled annotations.

But there is stronger evidence for the annotations and ms being his: in the preface ‘advertisement’ to the second edition Babbage states that the chapter ‘On Hume’s Argument Against Miracles’ has been ‘nearly rewritten’. The first image you sent with the pencilled annotations, which are surely from the first edition, correspond to changes made in the second edition. It is not credible that anyone other than Babbage would have made what are essentially editorial instructions, and editorial amendments, that were carried through to the second edition.

There is even more conclusive evidence in the sample page 131 where the pencilled annotations appear verbatim in the second edition, and the several pencilled deletions have also been carried through.

The ms in the third of the images you sent starts with the same opening sentence that appears in the second edition at the top of page 127 though what follows has been edited and amended. It could be that this is a sheet from the original manuscript for the first edition though not having access to a first edition I am unable to confirm this.

It is fair to conclude that the annotations are Babbage’s. It is difficult to see any other explanation.

Image of Treatise

Although I do not know how General Vogdes came to have this particular annotated first edition of the Ninth Bridgewater Treatise in his collection, I am not surprised as his entire library constituted over seven thousand scientific volumes on topics related to geology, paleontology, and other scientific and philosophical disciplines. Indeed, his personal library included works by Darwin, Hume, Dana, Agassiz, and Lyell as well as many other well-known natural historians and intellectuals.

We are hopeful that this unique source might be of interest to some Babbage researcher or historian. Any scholars interested in pursuing this topic further should feel free to contact me directly at the SDNHM Research Library.


Bibliography

Swade, Doron, Dr. “’Ninth Bridgewater Treatise.’ Message requesting assistance in authenticating possible rare volume by Charles Babbage.” Message to Margaret N. Dykens. January 2020. E-mail.

 

Margaret N. Dykens (July 2020). “Charles Babbage’s Ninth Bridgewater Treatise in the SDNHM Library.” Interfaces: Essays and Reviews in Computing and Culture Vol. 1, Charles Babbage Institute, University of Minnesota, 17-22.


About the author: Margaret N. Dykens received her Master's degree in Biology from the College of William and Mary, Williamsburg, Virginia in 1980. Upon graduation, she was hired as Staff Illustrator at the Harvard University Herbarium. Margaret went on to earn a second graduate degree in Library Science from the University of Michigan School of Information in 1993. In 1997, she became the Director of the Research Library for the San Diego Natural History Museum (SDNHM). In addition to her work directing the Research Library, she has served as curator for two exhibitions. The first, The California Legacy of A.R. Valentien, was based on the Museum's fine art collection and toured to numerous venues across the U.S. In 2016, she also curated the permanent exhibition, Extraordinary Ideas from Ordinary People: A History of Citizen Science, based on fine art works, historical objects, and rare books from the Research Library.


 

Of Bugs, Languages and Business Models: A History

Alejandro Ramirez, PhD, Sprott School of Business – Carleton University

Abstract: A series of wrong decisions precipitated the Y2K crisis: adopting the 6-digit date format, using COBOL as the standard in business computing, and discontinuing the teaching of COBOL in many American universities shortly after it was adopted. Did we learn anything from this crisis? (KW: Y2K crisis, COBOL, Internet history, Outsourcing.)

(PDF version available for download.)

 

Y2K Time magazine cover
TIME magazine cover addressing social misunderstanding of Y2K (Courtesy of the Charles Babbage Institute Archives)

Introduction

Twenty years ago, we averted the Y2K crisis. When we talk about it now, people are genuinely puzzled that it was such an expensive affair. They have a distorted idea of a crisis that did not happen: it was supposed to be the end of the world, but in the end, nothing actually happened. Then they wonder if something similar could happen again. That is really the crux of the matter: what did we learn from the Y2K crisis?

Knowing the history of this crisis is an important and serious endeavour. It helps us understand how computer usage evolved, what forces shaped our technology and our practices, and how computers contribute to society. History becomes an indispensable light guiding us in this understanding.

What were they thinking?

Employment of personnel to use computers in businesses became widespread in North America with the introduction of the IBM 1401 in 1959. Before then, machine-based data processing, where it existed at all, was generally performed by electromechanical accounting machines. Calendar dates, if needed, were fed in via punched cards indicating the date appropriate for that job. When programmers from the late 1950s to the mid-1960s decided, in order to save on memory costs (McCallum 2019), to use only the last two digits of the year, i.e., 60 instead of 1960, they never imagined that their programs would still be running at the end of the 20th century. After all, 40 years seemed a very long time, especially since they were saving approximately $16.00 USD per date by saving two bytes, 16 bits, of core memory valued at about one dollar per bit.
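To make the consequence of that two-digit decision concrete, here is a minimal sketch (in Python, my own illustration rather than anything from the article) of how two-digit years break date arithmetic at the century boundary, along with the kind of "windowing" repair that much later Y2K remediation applied; the function names and the pivot value of 30 are hypothetical.

```python
def years_between(start_yy: str, end_yy: str) -> int:
    """Span between two two-digit years, as 1960s-era code would compute it."""
    return int(end_yy) - int(start_yy)

print(years_between("60", "99"))    # 39  -- fine within the 20th century
print(years_between("99", "00"))    # -99 -- a term starting in 1999 and ending in 2000

def expand(yy: str, pivot: int = 30) -> int:
    """'Windowing' repair: two-digit years below the pivot map to the 2000s."""
    y = int(yy)
    return 2000 + y if y < pivot else 1900 + y

print(expand("99") - expand("60"))  # 39
print(expand("00") - expand("99"))  # 1
```

Windowing was only one remediation strategy; expanding the stored field to four digits was the other, and far more expensive, option.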

When IBM announced its new, more powerful System/360, with many innovative features compared to the 1400-series technology, it also decided—in the interest of compatibility—that the system's date would remain a 6-digit date. To cement this practice, on November 1, 1968, the U.S. Department of Commerce, National Bureau of Standards, issued a Federal Information Processing Standard which specified the use of 6-digit dates for all information exchange among federal agencies (FIPS 1968). The standard became effective on January 1, 1970, enshrining the 6-digit date in government bureaucracy, again with little to no thought of the year 2000.

It took about fifteen years for someone to realize that the 6-digit date might be a problem. Unknown to all but a few programmers, Jerome and Marylin Murray published their call to arms, Computers in Crisis: How to Avert the Coming Worldwide Computer Systems Collapse, in 1984. They credited their daughter Rosanne, a senior research analyst at Systemhouse, Ltd., of Ottawa, for the origins of the book: "This book may not have been undertaken were it not for a lengthy telephone discussion of the dating problem with Rosanne…Her interest and encouragement have been unflagging" (Murray & Murray 1984, p. xix).

Shortly after the book was published, Spencer Bolles posted, on January 18, 1985, from his computer at Reed College in Oregon, the first recorded mention of the Year 2000 problem on a Usenet group: "I have a friend that raised an interesting question that I immediately tried to prove wrong. He is a programmer and has this notion that when we reach the year 2000, computers will not accept the new date" (Bolles 1985).

Millennium bug guide
Few were able to distinguish facts from fiction (Courtesy of Charles Babbage Institute Archives)

Both Spencer L. Bolles' unnamed friend and Rosanne Murray seem to be among the first to worry about this issue. They are people most of us do not know about, and there were perhaps many others like them. But the problem gained real prominence once David Eddy called it "Y2K" (Rose 1999). Before Eddy's acronym, the problem was referred to as the Century Date Change (CDC), Faulty Date Logic (FADL), or the Millennium Bug. For Eddy, "Y2K just came off my fingertips," explains Rose (1999).

Reading Murray and Murray (1984), it becomes apparent that, regarding the state of computer resources in business, not much has changed. The following passage could appear in today's stories about the use of computers in organizations: "All this explodes in the midst of a world economy totally dependent upon computer resources for its survival and demanding of the services of skilled technical personnel whose availability is in dreadfully short supply" (Murray & Murray 1984, p. 221). In 1984 the comment was mostly about the skills needed to debug all those programs in organizations using 6-digit dates. No one knew exactly how many programs needed fixing. The task was clear: "Our fault is in our readiness to 'patch' or treat symptoms until it is often too late to successfully eradicate the disease" (Murray & Murray 1984, p. 334). Its magnitude was not.

Cost savings in core memory and a 6-digit date standard imposed by government are two pieces of the story of why the Y2K bug was so problematic. To understand its ramifications completely, it is necessary to talk briefly about how the Common Business Oriented Language (COBOL) became the lingua franca of business programming.

In answering the question many of us have asked about what programmers were thinking when they created their programs, it is important to remember that we work within the reality we create and within the space of actions allowed in a given domain (Heidegger 1993). Programmers must comply with current standards when writing their code. The 6-digit date was one of them.

Washington, we have a problem

At the beginning of the second half of the twentieth century, the computer started to move beyond science and engineering as more large corporations and government organizations adopted it, making it "the most vital tool of management introduced in this decade" (Lohr 2001, p. 44). It was entering accounting, payroll, logistics, manufacturing, and purchasing. But at that time programs were still a foreign language to managers, and the need for a language suited to solving management and business problems became essential.

As its name indicates, COBOL was a Common Business Oriented Language. It had every reason to be rejected: it was a language designed by a committee, it was not intellectual enough to entice computer scientists, and it was created for practical reasons and agreed upon by computer vendors. Could there be a common language to get the business community working together? That was the premise of a meeting in April 1959 at the University of Pennsylvania computer centre.

The same year, in late May, the Pentagon hosted a 2-day meeting attended by most of the computer makers and some heavy business computer users, a group of about 40 people. From this, and subsequent meetings, a 'short-term' committee of six was formed. Several names for the language were discussed, among them Busy (Business System), Infosyl (Information System Language), Datasyl (Data System Language), and Cocosyl (Common Computer Systems Language). According to Grace Hopper, it was Robert Bemer who proposed the name Common Business Oriented Language (COBOL). The committee delivered "the business FORTRAN"; it was now up to vendors to adopt it and deliver their machines to organizations COBOL-ready.

COBOL the language of Business applications (Public Domain)

 

Computer vendors worldwide decided to offer COBOL as the language for their business clients, and businesses, without questioning it, started to use COBOL for developing virtually any business solution. The language was simple enough that programmers could learn it quickly and join the effort, and suddenly the business world was running on COBOL.

It is important to note that computer scientists were not particularly interested in a language created by a committee, but they saw the need to teach it in the curriculum for a few years, until more interesting academic languages were designed and attracted their attention: C, Algol, Pascal, C++. Slowly, COBOL courses started to dwindle in many universities, to the point that by the mid-1980s there were not many computer science students who knew how to program in COBOL. Still, COBOL was the business FORTRAN, a huge achievement, especially in a field where users care more about their discipline (business) than about computers. It became clear that computers were useful tools, but tools, nonetheless.

The Achilles' heel of COBOL was not its syntax but its heavy reliance on the widely adopted 6-digit date standard to manipulate time-based business data. Once it became evident that many COBOL programs were vulnerable to the Y2K bug, the call for action became louder. In Murray and Murray's own words: "This is a true computer crisis—it is application and nationality independent. It is worldwide. It has but one certain remedy—immediate action to terminate 6-digit date involvement in current system development and the scheduling of the ultimate conversion of existing systems" (Murray & Murray 1984, p. 334).

But could this call to arms be enough to mobilize companies all over the world to solve the problem? For that, its costs had to be estimated. The bottom line was simple: which was more expensive, the problem or its cure?

As an example, Lyons (1981) describes one Fortune 500 company, based in Chicago, that had a library of 50,000 COBOL programs. With each program averaging 750 lines, that library held an estimated 37,500,000 lines of code. A hypothetical programmer able to review 100 lines of code per hour, working around the clock, seven days a week, would need about 42.8 years to get through it. To finish the job in one year, you would need to find another 42 equally efficient programmers and coordinate them—and that is one library, in one company! It was immediately clear that the job was monumental.
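A quick back-of-the-envelope check of that estimate (my own sketch, not from Lyons; the 100-lines-per-hour review rate is the assumption that makes the 42.8-year figure work out):

```python
# Rough arithmetic behind the Lyons (1981) example cited above.
programs = 50_000
lines_per_program = 750
total_lines = programs * lines_per_program      # 37,500,000 lines of COBOL

lines_per_hour = 100                            # assumed pace for one reviewer
hours_needed = total_lines / lines_per_hour     # 375,000 hours
years_nonstop = hours_needed / (24 * 365.25)    # reviewing around the clock

print(f"{total_lines:,} lines -> about {years_nonstop:.1f} years of nonstop review")
# 37,500,000 lines -> about 42.8 years of nonstop review,
# hence roughly 43 such programmers working in parallel to finish within a year.
```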

Suddenly the need for COBOL talent was clear, but there were not many new computer science students in the pipeline who knew COBOL. Most COBOL programmers were working in the field performing their daily tasks; they could not be asked to stop working and start debugging. The few COBOL professors left in a handful of universities overnight received job offers to head and manage COBOL teams tasked with debugging programs. It became impossible for universities to keep that underused talent in house, especially considering the salaries on offer. Organizations that did not want to enter the salary bidding wars for COBOL talent decided to develop their own. Three years before the year 2000, according to a CIO quoted by Callaway (1997), "A good mainframe programmer today is worth his [or her] weight in gold."

The Y2K bug was a costly undertaking, but it was vital to tackle it. Arranga and Price (2000), using a pop-culture metaphor, point to the importance of the Internet and the emerging World Wide Web in solving the Y2K problem: "Perhaps the most important Y2K side effect provided Cobol with something it had always lacked: a broad-based community of developers focused on providing Cobol with options. With too much code to be repaired manually, a market of software remediation tools came of age. The tools did not turn into pumpkins at the stroke of midnight. Instead they kicked off the other shoe, took off the gown, put on a Spidergirl outfit, and swung onto the Web" (p. 18).

Outsourcing: Solution or Problem? (This Photo by Unknown Author is licensed under CC BY-NC-ND)

The Internet, Y2K, and Outsourcing to India

Internet adoption all over the world became the norm, and India was among the biggest adopters. Thomas Friedman, in his 2005 bestseller The World is Flat, includes outsourcing as the fifth of the ten 'flatteners' in his model of how the modern world became flat. In the story Friedman tells about outsourcing, the principal character is none other than the Y2K bug. According to him, India suddenly became linked to North America through fiber-optic cable just as the Y2K bug became an urgent reality at one end of the link; India had a surplus of COBOL programmers at the other. "And so with Y2K bearing down on us, America and India starting dating, and that relationship became a huge flattener, because it demonstrated to so many different businesses that the combination of the PC, the Internet, and fiber-optic cable had created the possibility of a whole new form of collaboration and horizontal value creation: outsourcing" (Friedman 2005, p. 108).

If the Y2K bug was a big problem for the West, it became a big opportunity for India. Suddenly every computer running COBOL programs in the West needed to be reviewed. The enormous task of checking programs line-by-line required an equally enormous number of qualified people, and those people were available in India. This gave Indian IT companies the opportunity to work side by side with the largest Western corporations. Before Y2K, India was producing many IT professionals who hoped to find a position somewhere in the West. Many did, but thanks to the Y2K bug, suddenly these IT professionals did not need to leave India to work for those corporations.

Outsourcing from America to India, as a new way to collaborate, exploded after the year 2000. Remember that soon after 2000, the dotcom bust shocked many companies that had emerged around the time of Y2K. The bust, a problem for many start-ups in North America, was another opportunity for Indian companies: since India was already linked to the West by fiber-optic capacity laid during the boom, the bust made the cost of using those links virtually free.

Lessons…learned?

Shortly after it was clear that the Y2K problem had been solved, Kappelman (2000) offered an evaluation of what he called "Some Strategic Y2K Blessings." He starts by listing a series of issues that became evident to the profession while working on the problem: "Y2K showed everyone the importance of systems and software in enterprise success…the value of inventorying and tracking IT assets, maintaining standards for software processes including careful version documentation, quality testing practices and independent validation, simplicity in software and systems, clearly defining project management, and a reasonable balance between centralization and decentralization" (p. 42). Somehow, along the way, other issues became more prominent, and the profession seems to have forgotten how those standards, i.e., the 6-digit date and COBOL as the de facto language for business applications, had been imposed on its practice. In hindsight, adopting those standards was more a problem than a solution.

Kappelman (2000) indicates that these projects "were extensive and complex" and that, due to "lamentable software practices," correcting them was "very difficult" (pp. 42-43). Those projects consumed more than half of the IS operating budget. He puts the total global cost of Y2K at between $375 billion and $750 billion (p. 43). What was remarkable was to find out that "COBOL accounts for approximately 34% of all applications although only approximately 16% of all professional programmers work with it" (p. 34). It is astonishing that, with such a mismatch, the world was able to avoid major consequences from the 6-digit date standard. We can think of the Y2K effort as a major technological "spring-cleaning affair" just before the new millennium. It was an enormous cleaning task! (See Table 1.)

Table 1: Magnitude of the Y2K Mess

Outsourcing became the new frontier for American companies. Even though it has declined, it is now one among many solutions companies use to become more competitive. Looking for ways to cut costs by taking jobs away from middle-class Americans became a liability. Outsourcing was a solution to a lack of talent at the time; regardless, some companies abused it by following the mantra, "I better start outsourcing as many functions as I can…so what can be outsourced must be outsourced" (Friedman 2005, 135), even when talent was available in America.

Outsourcing became an 'India versus Indiana' situation. Friedman (2005) describes a real-world example. In 2003, the state of Indiana decided to upgrade the computer system that processes its unemployment claims, claims that had increased in part because of firms that were outsourcing, and it put a contract out to bid. The contract was won by Tata America International, the US-based subsidiary of India's Tata Consultancy Services Ltd., located in New York City (Friedman 2005, p. 240). Its bid was $8.1 million lower than the closest one, a joint effort of Deloitte Consulting and Accenture Ltd. No firms from Indiana bid for the contract. An Indian company won, and the work was outsourced to its Indian headquarters. The irony is that Indiana was outsourcing the very department responsible for dealing with the claims of Indiana workers affected by outsourcing!

Sadly, the story does not end there. When the details of the contract became public, Republicans made it a campaign issue, and it became a political nightmare for the Indiana governor, a Democrat. The contract was ultimately cancelled, and Tata received almost one million dollars to cover some of the expenses it had incurred. A new set of smaller contracts was put out to bid so that Indiana firms could compete. In the end, the work became more expensive (and less efficient), but it kept the politics at bay.

Alan Blinder (2005) made what he called a bold prediction: "In the future, and to a great extent already in the present, the key distinction for international trade will no longer be between things that can be put in a box and things that cannot. It will, instead, be between services that can be delivered electronically over long distances with little or no degradation of quality, and those that cannot. The tradability of a vast array of services is, as they say, the New New-Thing. And there is little doubt that the fraction of services that can be delivered electronically will grow" (p. 6). That is perhaps the lesson that was hardest to learn, but organizations learned it very well.

Conclusion

Twenty years after the Y2K problem was solved, information systems have, more than ever, a pervasive presence in every organization. Those systems are not only adopted; they are sometimes transforming organizations, to the point that the new trend in business is digital transformation. In these twenty years, what have we learned? Are we confident that we will not face another doomsday because of the programs running our firms?

It is difficult to assess whether that is the case. It is clear that the success stories of the first two decades of the 21st century are computer-based business models: Google is the new library, Amazon is synonymous with online shopping, Twitter is the new Fourth Estate, and Facebook is the new Commons. Let's not forget that these companies rely on the algorithms that make them what they are. Those algorithms were coded in computer languages, and some of them draw, directly or indirectly, on COBOL sources.

The echo of Santayana's (1905) maxim that "those who cannot remember the past are condemned to repeat it" is constantly present. It is important to learn the lessons of the Y2K bug: what programmers were thinking when they created their applications in the late 1950s and early 1960s; how COBOL became a standard in business computing; how the Internet came to the rescue; and how outsourcing emerged as a business model. All of these are important reminders of what it means to depend on technology to mediate human relations.

What can we learn from the Y2K crisis? We can trace the contributions of those who, by becoming aware of a looming crisis, rang the warning bell to save us all; those who proposed a solution; those who implemented it; those who quantified it; and those who took advantage of the new business models that emerged from the crisis. This story of bugs, languages, and business models is a way of keeping the history of a crisis averted, a crisis that some call "Y2K: the bug that failed to bite" (Story & Crawford 2001), a label that ignores the ample evidence that the bug did not bite because the systems were debugged. The few instances where it was not fixed, such as the welcome panel at Nantes' Central School on January 3, 2000, show what could have happened.

By Bug de l'an 2000 - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=14719963

If we ignore the evidence and forget the history of this event that united business and computer professionals, then we will most likely face a similar situation. Another by-product of this lesson is seeing how some solutions to the problems we face become problems themselves, i.e., outsourcing. Students particularly need to learn this history, be inspired by it, and be motivated to continue the work of building safer and better systems. Managers, too, can learn to understand the systems they adopt and to be proactive about keeping in house the skills needed to guarantee that such systems run impeccably.


Bibliography

Arranga, Edmund C. and Wilson Price. (March-April 2000). Fresh from Y2K, What’s next for Cobol? IEEE Software, Focus – Guest Editors’ introduction, 16-20.

Blinder, Alan S. (December 2005). Fear of Offshoring, Princeton University Center for Economic Policy Studies, Working Paper, 119.

Bolles, Spencer. (January 18, 1985). Computers Bugs in the Year 2000. Newsgroup: net.bugs. Usenet: 820@reed.UUCP.

Callaway, Erin. (March 24, 1997). COBOL comeback, PC Week, 131+, https://link.gale.com/apps/doc/A19246435/CPI?u=ocul_carleton&sid=CPI&xid=c2d04124

FIPS. (1968). Federal Information Processing Standards Publication 4. US Department of Commerce, National Bureau of Standards.

Friedman, Thomas. (2005). The World is Flat: A brief history of the Twenty-First Century, Farrar, Straus and Giroux.

Heidegger, Martin. (1993). The question concerning Technology, in D. F. Krell (Ed.) Basic Writings: Ten Key Essays, plus the Introduction to Being and Time, Revised & Expanded Edition, Harper Collins:307-341.

Kappelman, Leon A. (March-April, 2000). Some Strategic Y2K Blessings, IEEE Software, 42-46.

Lohr, Steve. (2001). Go To: The story of Math majors, Bridge players, Engineers, Chess wizards, Maverick Scientists and Iconoclasts – The Programmers who Created the Software Revolution, Basic Books.

Lyons, M.J. (1981). Salvaging your software assets (tools-based maintenance). AFIPS Conference Proceedings, 50, National Computer Conference, AFIPS Press.

McCallum, J. C. Memory Prices 1957-2019, available at https://jcmit.net/memoryprice.htm

Murray, Jerome and Marylin Murray. (1984). Computers in Crisis: How to Avert the Coming Worldwide Computer Systems Collapse, PBI.

Rose, Ted. (December 22, 1999). “Who Invented Y2K and why did it become so Universally popular?” The Baltimore Sun.

Santayana, George. (1905). The Life of Reason: or, The phases of human progress, Volume 1, Scribner.

Story, Jonathan and Robert J. Crawford. (2001). Y2K: The Bug That Failed to Bite, Business and Politics, 3(3), 269-296. DOI: 10.1080/13695250120104515

 

Alejandro Ramirez (June 2020). “Of Bugs, Languages and Business Models: A History.” Interfaces: Essays and Reviews in Computing and Culture Vol. 1, Charles Babbage Institute, University of Minnesota, 9-16.


About the author: Alejandro Ramirez is an Associate Professor at the Sprott School of Business – Carleton University in Ottawa, Ontario, Canada. He has a PhD in Management – Information Systems (Concordia), an MSc in Operations Research & Industrial Engineering (Syracuse), and a BSc in Physics (ITESM). He has been active with the Business History Division of ASAC since 2012 and has served as Division Chair and Division Editor. He is interested in the history and the stories of information systems in organizations. He is currently working with colleagues on a New Frontiers in Research funded project, "Imagining Canada's Digital Twin." (Alex.Ramirez@Carleton.ca).


 

Where Dinosaurs Roam and Programmers Play: Reflections on Infrastructure, Maintenance, and Inequality

Jeffrey R. Yost, Charles Babbage Institute, University of Minnesota

Abstract: This short essay examines two temporally separated crises (current unemployment system failures and Y2K), focusing on connections between infrastructural (largely COBOL-based) IT systems, maintenance, and societal inequality. (KW: computer history, unemployment system infrastructure, maintenance, COBOL, Y2K, inequality).

(PDF version available for download.)

Grace Hopper

Rear Admiral Grace Murray Hopper was an unparalleled leader in the early software field. In addition to her pioneering work with the A-0 compiler, her FLOW-MATIC was particularly influential. More than any other language, FLOW-MATIC provided a model for the COBOL development team. (Image: United States Navy) 

In March 1959 Burroughs Corporation computer scientist Mary Hawes called for an industry and government consortium to develop a standard programming language for business—promoting greater portability for organizational users transitioning between mainframe computers. With the appearance of Autocode, FLOW-MATIC, FORTRAN, ALGOL-58, and other 1950s programming languages, she recognized the high costs of proliferation.

Jean Sammet

Jean Sammet, COBOL co-developer. She was the first woman president of ACM and a visionary leader. Sammet had a long and distinguished career at IBM. Despite prolific achievements and stellar intellectual and managerial contributions to her company and her field, she was not named an IBM Fellow, an honor granted disproportionately to men—approximately 90% of the 275 IBM Fellows are male. (Image: Charles Babbage Institute Archives)

The following month, Hawes' call evolved into the founding of the Conference on Data Systems Languages (CODASYL), sponsored by the U.S. Department of Defense (DoD). CODASYL's ongoing efforts drew inspiration from Sperry-Univac's Grace Murray Hopper, her FLOW-MATIC, and her advocacy for languages approximating English syntax. For these important contributions, Hopper is sometimes referred to as the "Mother of COBOL" (Common Business-Oriented Language). Despite some crediting her as its "inventor," a CODASYL committee of six—Howard Bromberg (RCA), Howard Discount (RCA), Vernon Reeves (Sylvania), Jean Sammet (Sylvania, joined IBM in 1961), William Selden (IBM), and Gertrude Tierney (IBM)—developed COBOL. DoD published the COBOL 60 specifications in January 1960; eight revisions followed, the most recent in 2014. It began as a standard for the DoD and became a standard of the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO). Today, though they remain widespread global technologies, mainframes and COBOL are commonly described as dinosaurs and dinosaur code.

Still Coding After All These Years

To highlight COBOL's staying power, and perhaps glimpse its future, in 2014 the Defense Contract Management Agency (DCMA) stated that it was not looking to replace its system composed of two million lines of COBOL code (handling 330,000 contracts worth $1.2 trillion), but rather re-upping on COBOL. DCMA put out a statement "bragging" that its new COBOL system would "probably be around for another 20 to 30 years" (Mazmanian 2014).

Back in 2004, IT research firm Gartner, Inc. had estimated there were two million programmers knowledgeable in COBOL—eight percent of all programmers globally—but that the number was decreasing at five percent a year (King, 2020). 

Today, almost half of banks in the U.S. run systems programmed in COBOL, and 95 percent of all ATM transactions rely on COBOL (Allyn, 2020)—transactions worth trillions of dollars every day. Even in normal times, demand for COBOL experts exceeds supply.

Early Quincy ATM by Burroughs Corporation

Early Quincy ATM by Burroughs Corporation. COBOL code was and remains the backbone of ATM transaction processing. (Image: Charles Babbage Institute Archives).

Feeding the Beast

From the 1960s into the 1990s, many universities offered COBOL courses, as did companies and vocational schools like the Control Data Institutes. Today, in an age where AI/analytics, games, robotics, cloud, and the internet of things are foremost for many computer science students, few consider learning legacy systems and legacy languages. Accordingly, COBOL courses are scarce. A Slate article quoted Prof. John Zeanchock of Robert Morris University stating that just 37 colleges and universities globally have a "mainframe course" on the curriculum. Most schools' faculty are unable to suggest legacy-specialist students or graduates when banks or local governments call (Botella 2020). In our culture, innovation is revered, and maintenance is not. In IT there is a myopic attention to the latest tech and a failure to recognize and value that IT maintenance requires great skill and can be innovative (new processes, new fixes, etc.). Privileging innovation over maintenance is also in part tied to gender stereotypes and discrimination, as historically women have had greater opportunity in the critical areas of services, maintenance (both machines and debugging), and programming (from plugboards to languages), and fewer opportunities in computer and software engineering (Yost, 2011, 2017).

 

Women students at PLATO terminals
Women students at PLATO terminals at the University of Illinois in 1963. Women's participation in CS as majors increased as the field gained traction in the 1960s and 1970s, and the percentage of women peaked in the 1980s. Since then there has been a sharp decline to a low plateau. This lack of gender diversity holds back CS and IT labor in all specializations, including legacy. (Image: Charles Babbage Institute Archives).

The percentage of women majors in computer science has declined sharply over the past quarter century—from more than 35 percent in the 1980s to 18.1 percent in 2014, varying only slightly since (nsf.gov/statistics). The reasons are varied, but gender stereotyping, a male-dominant computing culture, and educational and workplace discrimination are factors (Abbate, 2012; Hicks, 2017; Misa, 2011). This has furthered labor shortages (in all areas, including legacy) and held back computer science. Labor shortages can become all the more profound in times of crisis, including the current health and economic crisis.

More than a Jersey Thing

On April 6, 2020, New Jersey Governor Phil Murphy made a public plea for volunteer "Cobalt" programmers (meaning COBOL) to aid New Jersey and help with glitches in an overburdened unemployment benefits computer system more than 40 years old. New Jersey was having difficulties with timely processing of unemployment payments to the flood of new filers. The increased burden (volume and parameters) on the unemployment system was a major bottleneck, or to borrow Thomas Hughes' term, a reverse salient, impeding timely and accurate data processing for those in need (Hughes, 1983).

This sparked an onslaught of journalistic articles as well as many Twitter, Facebook, and other social media posts. The critiques ranged from Governor Murphy and New Jersey having an antiquated unemployment insurance computer system to the call for volunteers drawn from a population segment undoubtedly the most susceptible to COVID-19 risk—the elderly. Meanwhile, social media erupted with jokes and ageist images of elderly individuals as potential volunteers.

Control Data Institutes (CDI)
Both university courses and those at IT vocational schools like Control Data Institutes (CDI) were critical to teaching a generation of COBOL programmers in the 1960s and 1970s—many now retired. Here we see a CDI classroom in 1967. (Image: Charles Babbage Institute Archives)

Other states, including Connecticut and Kansas, had similar shortages of trained COBOL experts to confront unemployment insurance system challenges. Understandably, unemployed workers waiting for benefits are extremely frustrated and angry, and they have expressed as much on the Kansas Department of Labor (KDoL) platform. Much is the matter with Kansas' system, which has its origins in the 1970s and has received inadequate updates for flexibility and scale. In late April, KDoL indicated a timeline in which processing could occur by late May (for many, that will push the wait to months). For states that have prioritized investing in updating other computer systems, but not unemployment insurance, it amounts to neglecting infrastructure that serves the most vulnerable in society.

Why do so many states have ill-equipped IT systems for unemployment benefits processing? Replacing long-existing systems is complex and expensive (hundreds of millions of dollars). Change is also disruptive to existing labor and existing skill sets. Unemployment systems serve those lacking political power, and federal and state governments deprioritize them. Further, systems (in all their technical, political, economic, and other contexts) become entrenched, or to use Hughes' concept, gain momentum (Hughes, 1983). Failures and pressures can redirect momentum: some states scrambled for cloud solutions once systems crashed in April—possibly the least bad option, but at a suboptimal time, since standing up new systems and processes on the fly is especially difficult. Regardless, the problem is one of infrastructure—not valuing maintenance, labor, and recipients. It is not merely COBOL versus the cloud; in fact, COBOL can and does integrate with AWS, Azure, and IBM clouds, and hybrid cloud is common.

State IT Workers and Hired Guns’ Heroic Efforts

North Texas' COBOL Cowboys staffing firm, larger IT services enterprises, and COBOL-skilled independent contractors are in great demand. The governors, state DoLs, and state CIOs are doing their best to staff up to address problems. For the systems analysts, programmers, and other state employees and contractors, the hours are long, the work difficult, and the efforts truly heroic. The Federal CARES Act's unemployment benefits, PUA/PEUC, allow states to extend the duration of benefits and include those usually not eligible—the self-employed. This adds greatly to both volume and complexity. In my playful title, "play" is used for where work plays/is performed (fewer coders choosing legacy) and to highlight coders' creativity—in the spirit of CS metaphors like "sandbox" for building (non-live) code.

Global digital divide map

 

As this United Nations graphic shows, the global digital divide is profound. The ramifications during a global pandemic are extreme, as digital connectivity influences opportunities to shelter, connect, and safely earn income. This map is not intended to and does not capture the deep digital divide in the U.S. along class and race lines. (Image: Dakman5, granting public domain rights, Wikicommons)

Domestic and Global Digital Divides

In the coming year, the overall percentage of Americans below the poverty line will peak higher than at any time in more than 50 years—the impact for African-American, Hispanic, and Native-American populations is particularly severe. The disparity of access to health insurance, banking, loans, and information technology, as well as exposure to risk and disparity in incidence and mortality with COVID-19, highlights extreme and growing race and class inequality in the United States.

Washington, D.C.'s unemployment platform urges benefits filers to use Microsoft's Internet Explorer. Microsoft retired Explorer in January 2016; an unsupported version remains for computers, not smartphones. A 2019 Pew Research Center survey showed that 54 percent of Americans with incomes under $30,000 a year have a computer, while 71 percent have a smartphone. Of those making over $100,000, 94 percent have a computer and broadband at home (Anderson and Kumar, 2019). Only 58 percent of African-Americans have a computer, versus 82 percent of whites (Perrin and Turner, 2019). In digital infrastructure, just as in education, healthcare, and housing, there are two Americas.

 

Control Data 3600 mainframe
Unloading a Control Data 3600 mainframe in 1964 at Tata Institute, Bombay, India. Both the IITs and Tata were fundamental IT educational and vocational infrastructure that allowed a major software and services industry to prosper in the 1990s with COBOL Y2K compliance work and much more. (Image: Charles Babbage Institute Archives)

Y2K: Why to Care

An earlier crisis largely involving COBOL, one with a long and visible runway, is both consequential context and instructive for current challenges. About a quarter century ago, governments and corporations began seriously addressing the pending Y2K crisis—caused by two-digit date fields, often in COBOL code—to avert risks to life and the economy, to make it a nonevent.

Investments and global cooperation were key, and the International Y2K Cooperation Center played a meaningful role in fostering collaboration. The shortage of programmers knowledgeable in COBOL, the lower expense, and the overwhelming volume of code led to outsourcing to an emerging Indian IT services industry. This lent momentum to that trade, and to a shifting geography of IT work that remains impactful (though corporate decision-makers are accelerating artificial intelligence applications, producing further labor transformations detrimental to Indian IT laborers, developments that standout ABD sociologist and CBI IDF Fellow Devika Narayan is insightfully analyzing). Gartner Inc. estimated U.S. government and business expenditures at up to $225 billion, a breathtaking sum indicative of the costs of putting off maintenance until a time-sensitive crisis. Passing into the new millennium with few major problems lent credence to two diverging interpretations—that heavy investment in maintenance had been necessary to avert catastrophe, or, more common (and less accurate), that it was an overhyped problem leading to squandered funds on preparation and maintenance fixes. Offshoring saved money in the short run, but may not have in the longer run; it left a legacy of less and less current on-shore COBOL expertise (for maintenance, updates, security, etc.), a workforce and talent helpful in global crises, particularly ones in which unfortunate (U.S.) nationalistic tendencies and policies have inhibited international cooperation.

CONNECT and Disconnects

Maintaining infrastructure is important. Anemic IT budgets have hurt not only opportunities to change and move to innovative new solutions, but also the ability to maintain existing systems well and to assure that they perform, and perform at scale, in both normal times and crises. The reverse salient certainly is not always COBOL, or COBOL alone. State auditors warned Florida Governor Ron DeSantis that Florida's unemployment site, its "CONNECT" cyberinfrastructure, had more than 600 system errors in need of fixing, but that state officials had "no process to evaluate and fix" them (Mower, 2020). It was a $77 million system launched in 2013, one which, he is quick to point out, his administration inherited. This underlines a challenge not just in Florida but in many states—the stance that inadequate infrastructure is the predecessors' fault, is not the current leaders' problem, and should be left for successors to fix. Often the (now) multi-hundred-million-dollar cost typical of major upgrades to new unemployment insurance systems (and their ongoing refinement) is difficult without federal assistance. Florida's CONNECT is a reminder of damaging disconnects and of leaders' inattention to infrastructure for vulnerable people. The problem is also one of meager and dwindling federal support. Federal aid for state unemployment administration has been dropping for a quarter century, with severe cuts in 2018 and 2019. In a survey (pre-COVID-19), more than half of the states responded that their unemployment system problems were "serious" or "critical" (Botella 2020).

Minneapolis Interstate 35W Bridge collapse
Minneapolis Interstate 35W Bridge collapse, August 2007. Physical infrastructure gets far more federal support for states than ethereal software infrastructure. Both require evaluation, audits/checks, and timely maintenance—or they break—for software in the form of crashes, delays, breaches, etc. (Image: Kevin Rofidal, United States Coast Guard. Wikicommons. Public domain USCG Image. 17 U.S.C. § 101 and § 105).

Neglected Infrastructure and Crashes

Working two-tenths of a mile from the site of the 2007 Interstate 35 West Mississippi River Bridge collapse in Minneapolis, I am frequently reminded that strong, safe, and well-maintained infrastructure is essential. Twenty-eight percent of infrastructure project funding at the state level comes from federal grants (primarily for physical infrastructure). States' invisible software infrastructure is starved, especially unemployment systems. Hopefully the COVID-19 pandemic leads not only to evaluating our medical preparedness with ICUs, PPE, and unmet needs in free-enterprise insurance and healthcare, but also to greater evaluation of IT infrastructures. Ideally, these developments will lead all governors with poorly performing unemployment insurance systems to the same conclusion as Governor Murphy about the need for post-mortems on digital infrastructure. As he put it, "how the heck did we get here when we literally needed COBOL programmers"—learning from the past is important.

History Matters

One thing clear from the two COBOL crises is that history and archives matter—my thoughts here have at best scratched the surface of fundamental IT infrastructure and contexts that someone could analyze with tremendous depth using Charles Babbage Institute resources. CBI's archival and oral history resources (most transcripts online, all free) for studying the Y2K crisis and the history of CODASYL and COBOL (and many other topics and themes in the history and social study of computing) are the finest and most extensive in the world. A talented University of Pennsylvania doctoral candidate in the History and Sociology of Science, Zachary Loeb, has drawn on CBI's International Y2K Cooperation Center Records for his important dissertation on the cultural, political, and technical history of Y2K.

Over the years, a number of researchers have used our Conference on Data Systems Languages (CODASYL) Records. While that collection stands out for documenting COBOL and the group's work with databases (what occurred in 1959 and far beyond), we have many other COBOL materials in a variety of collections. One such (recent) collection is our largest overall at more than 500 linear feet, the Jean Sammet Papers—Sammet may have been the single most important developer of COBOL. Likewise, our Frances E. ("Betty") Holberton Papers have rich material on CODASYL and COBOL. There is also great COBOL content in our Burroughs Corporate Records, Control Data Corporation Records, Gartner Group Records, Auerbach and Associates Market and Product Reports, IBM SHARE, Inc., HOPL 1978, Charles Phillips Papers, Jerome Garfunkel Papers, Warren G. Simmons Papers, National Bureau of Standards Computer Literature, Computer Manuals, and many other collections. COBOL's history is one of government, industry, and intermediary partnerships, standards, maintenance, labor, gender, politics, culture, and much more. In a technical area that always seems focused on the new, new thing, its 60-year past and its continuing presence deserve greater study.


Bibliography

Abbate, Janet. (2012). Recoding Gender: Women’s Changing Participation in Computing, MIT.

Allyn, Bobby. (2020). “COBOL Cowboys Aim to Rescue the Sluggish State Unemployment Systems." NPR, April 22, 2020.

Anderson, Monica and Madhumitha Kumar. (2020). “Digital Divide Persists…” Pew Research Center, May 7, 2020.

Botella, Elena. (2020). "Why New Jersey's Unemployment System Uses a 60-Year-Old Programming Language." Slate, April 9, 2020.

Charles Babbage Institute Archives (finding aids to the collections mentioned in final paragraph).

Hicks, Marie. (2017). Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, MIT Press.

Hughes, Thomas P. (1983). Networks of Power: Electrification in Western Society, 1880 to 1930, Johns Hopkins University Press.

Kennelly, Denis. (2019) “Three Reasons Companies are only 20% Into Cloud Transformation.” IBM.com, March 5, 2019.

King, Ian. (2020). “An Ancient Computer System is Slowing Giant Stimulus.” Bloomberg.com, April 13, 2020.

Mazmanian, Adam. (2014). “DoD Plans Upgrade to COBOL-based Contract System” FCW, July 7, 2014.

Misa, Thomas J., ed. (2011). Gender Codes: Why Women are Leaving Computing, Wiley.

Mower, Lawrence. (2020). “Ron DeSantis…” Tampa Bay Times, March 31, 2020.

Perrin, Andrew and Erika Turner. (2019) “Smartphones Help Blacks and Hispanics Bridge Some—But Not All—Digital Gaps with Whites,” Pew Research Center, August 20, 2019.

Yost, Jeffrey R. (2011). “Programming Enterprise: Women Entrepreneurs in Software and Computer Services,” in Misa, ed. [full cite above].

Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry, MIT Press.

Special thanks to CBI Acting Archivist Amanda Wick for discussion/insights on COBOL and our collections.

 

Jeffrey R. Yost (May 2020). "Where Dinosaurs Roam and Programmers Play: Reflections on Infrastructure, Maintenance, and Inequality." Interfaces: Essays and Reviews in Computing and Culture Vol. 1, Charles Babbage Institute, University of Minnesota, 1-8.


About the author:  Jeffrey R. Yost is CBI Director and HSTM Research Professor at the University of Minnesota. He has published six books (and dozens of articles), most recently Making IT Work: A History of the Computer Services Industry (MIT Press, 2017) and FastLane: Managing Science in the Internet World (Johns Hopkins U. Press, 2016) [co-authored with Thomas J. Misa]. He is a past EIC of IEEE Annals of the History of Computing, and current Series Co-Editor [with Gerard Alberts] of Springer’s History of Computing Book Series.  He has been a principal investigator of a half dozen federally sponsored projects (NSF and DOE) on computing/software history totaling more than $2 million. He is Co-Editor [with Amanda Wick] of Interfaces: Essays and Reviews in Computing & Culture.