People’s Computing: A Gaming Inroad

As I embarked on my initial research, I found an interesting starting point where computers and education intersected. That intersection provided very fertile ground for the development of computer games.

A key moment in the microelectronic revolution, which eventually made computing accessible to many people, was the Soviet Union’s launch of the Sputnik satellite on 4 October 1957. In the United States, the launch of Sputnik provided the impetus to reconsider American education as a whole. It was felt that a “space race” was coming, in fact a technology race, and that American education had to prepare its youth for it. Better schooling was required, particularly in math and science.

Computing in Education

In September 1958, the United States Congress passed the National Defense Education Act. This provided funds to improve teaching, with an emphasis on science and mathematics. The federal government thus made money available for innovative approaches to education, like using technology in the classroom. Money supplied by the government, as well as by private institutions like the Carnegie and Ford Foundations, made it possible during the early 1960s for schools to experiment with new educational methods.

This impetus for adding technology to schools had a direct impact on the part of computing history that most interests me from a gaming perspective. There’s a lot of history here that I won’t cover in this post, although I’ll likely have reason to elaborate on much of it in subsequent posts.

A Mythology Formed

While investigating that history, you also encounter a persistent mythology in computing. That mythology holds that computing went directly from a kind of priesthood, an elite model of mainframe computing in the 1960s, to liberation from it by the so-called “homebrew hobbyists.” These were non-professionals who “worked out of their garage” (hence the “homebrew”) and ultimately gave us personal computers, starting with the Altair in 1975, moving through the 1977 “trinity” of the Commodore PET, the Apple II, and the TRS-80, and leading into the early 1980s with the IBM PC and its various clones. Those developments ultimately led to diversification in the 1990s as the “World Wide Web” became the face of the existing Internet.

So the general form of the mythology is that we had corporate/institutional computing and then we had personal computing. Yet in between those two we had what we might call “people’s computing.” This is a part of the history, and I would argue a very large part of it, that often gets entirely dismissed, assuming it’s even known about at all.

There was a decade, from 1965 to 1975, during which students, educators, and enthusiasts created personal and social computing before personal computers or a broadly public and accessible Internet existed. At that time, newly emerging personal access to computing moved in lockstep with network access via time-shared systems.

This technology context, combined with the education context I mentioned previously, meant that primary and secondary (K-12) schools, as well as colleges and universities, became sites of technological innovation during the 1960s and 1970s. Students and educators were the ones who built and used academic computing networks, which were, at the time, facilitated by a new type of computing known as time-sharing.

Time-sharing was a form of networked computing that relied on computing terminals connected to a central computer via telephone lines. Those terminals were located in such social settings as middle school classrooms, college dorm rooms, and university computing labs. Communal institutions, such as schools, universities, state governments, and the National Science Foundation, enabled access to and participation in these systems.

Shared Working and Shared Work

Collective access to a social, communal resource meant the possibility of storage on the central computer that all the terminals connected to. This meant that users could share useful and enjoyable programs across the network. By design, time-sharing networks accommodated multiple users, and multiple users meant more possibilities for cooperation, community, and communication. This in turn allowed for a great deal of experimentation, collaboration, innovation, and inspiration.

Advocates of 1960s and 1970s time-sharing networks often shared the belief that computing, information, and knowledge were becoming increasingly crucial to American economic and social success. That viewpoint was related to a broader one: that computing would be essential to what many perceived as an emerging “knowledge society.”

Such a society would be predicated on the sharing of knowledge, and thus computing would have to be essential not just to storing that knowledge but also to making sure it was appropriately democratized. Fostering those viewpoints, and those practices, in an educational context was another way that the technology and education contexts were aligning.

In fact, it was in this historical context that people began discussing the possibility of a national computing network, one comparable to the national telephone network or the national electrical grid.

During the 1960s, academics and businesspeople alike grew increasingly interested in, and evangelical about, a national computing utility, or perhaps even multiple computing utilities. The idea was that computing services would be delivered across the United States over time-sharing networks. Entire businesses were launched to realize this vision: a world in which all Americans benefited from computing access in their homes and workplaces, just as they benefited from the comparable national utilities of water, electricity, and telephone.

Technology had to enable all of these grand visions, of course, and, for a time, there was a symbiotic relationship with the minicomputer marketplace, which provided a lower-cost alternative to very expensive and very large mainframes. Minicomputer manufacturers, like the Digital Equipment Corporation and Hewlett-Packard, supported educational materials as a way to sell their machines, further aligning technological and educational interests.

A Computing Ecosystem Formed

What I hope you can see is that the histories of educational institutions and technology companies are woven together but still distinct. They certainly had areas where their interests aligned, but they also, of course, had very specific interests of their own that they worked toward. It was the educational aspects, however, that would give rise to the idea of people’s computing, with the technology companies figuring out not only how to be enablers of that but also how to make a whole lot of money doing so.

The early, more localized networks collectively embodied the desire for computer resource sharing. In an education context, this took shape around communities of interested individuals joined by computing networks. The desire was for a form of communal computing that could be extended beyond the education context. That “beyond the education context” impetus is what propelled the push for a national computing utility and the promise of a nation of computing citizens mentioned above.

So let’s consider the broad sweep of this history as it developed. I will be mentioning places and names that you may have no context for; all of them will come up in subsequent posts.

We ended up with educational computing enthusiasts like Noble Gividen, Bob Albrecht, and David Ahl. Various computing projects within education started, such as Project SOLO, Project LOCAL, and the Huntington Project. There were also education initiatives such as BOCES, TIES, and MECC. There were people who began to build hobbies around the new technology and, eventually, people who began to build careers around it.

All of this forged a crucial link between the computer and its larger social and economic environment. That, in turn, gave rise to different technical communities, distinctive subcultures, and thus varying ecosystems, which in turn led to discussions about the relationship between science and craft in engineering practice, for both hardware and software. Those communities, subcultures, and discussions eventually led to hardware becoming a consumer technology and software becoming a commercial industry.

We thus eventually saw the rise of an information economy serving as the backbone of an information society. And in this society, games and gaming have proved to be a staggering multi-billion-dollar industry, one that, at the time of this writing, brings in more revenue than any other form of entertainment.

So here’s where my initial historical forays led me: in the early 1960s, computers were, for the most part, remote, inaccessible, and unfamiliar to pretty much everyone. In the United States alone, there were approximately six thousand computer installations, and those tended to cluster in military, business, and research-focused university settings. Computing was an academic, industrial, and military affair. Individual access to computing was extremely rare.

As per the broad history I recounted above, computers would spread from military and university installations through factories and offices and, eventually, into the home. But that took time. In between, that is, between institutional computing and personal computing, there was a fascinating emergence of people’s computing. As I’ll endeavor to show in future posts, a whole lot of that people’s computing was driven by a focus on simulation and games.
