Talking to Machines: The Political Ascendance of the English Language in
Computer Programming
by
Ejilayomi Mimiko
B.A. (History), Simon Fraser University, 2018
Extended Essay Submitted in Partial Fulfillment of the
Requirements for the Degree of
Master of Arts
in the
School of Communication (Dual Degree Program in Global Communication)
Faculty of Communication, Art and Technology
© Ejilayomi Mimiko 2019
SIMON FRASER UNIVERSITY
Summer 2019
Copyright in this work rests with the author. Please ensure that any reproduction or re-use is done in accordance with the relevant national copyright legislation.
Approval
Name: Ejilayomi Mimiko
Degree: Master of Arts
Title: Talking to Machines: The Political Ascendance of the English Language in Computer Programming
Supervisory Committee:
Yuezhi Zhao, Senior Supervisor, Professor
Katherine Reilly, Program Director, Associate Professor
Date Approved: 29th August, 2019.
Abstract
This essay explores possible reasons why English has become the "default" natural
language from which programming commands are borrowed. Programming languages
like C, C++, Java and Python use English keywords exclusively. The essay explores the
social factors that underlie this phenomenon and how traditional power hierarchies are
perpetuated. It is a critical response to the emancipatory rhetoric that ushered
in the creation and popularization of the digital computer. It uses the story of the ALGOL
project to illustrate how technical goals are shaped by social factors that inevitably
reify inequality into technological artefacts. ALGOL, an attempt to create a standardized,
machine-independent, universal programming language, answered a significant
number of technical questions but did not bridge the natural language gap. By way of
historical exploration, I argue that this result is an expression of the American
globalization of the computing industry.
Keywords: Computer Programming; English Language; Linguistic Imperialism; Media
Archaeology; Communication Studies
Dedication
For the person I started writing my first essays to, the woman who stopped everything to
make me whole when I was incomplete and the man whose path I failed to neglect.
Emiope, Mǎàmi and Daddy
Table of Contents
Approval
Abstract
Dedication
Table of Contents
List of Figures
List of Acronyms
Chapter 1. Introduction
  1.1. Programming and Binary representation
  1.2. English and Computer Programming
Chapter 2. Computing Technology and Society
  2.1. Philosophies of Technology
  2.2. Social Dimensions of Computing Media
    2.2.1. Media archaeology of Computational Media
    2.2.2. Computers as Organisms
    2.2.3. Race and Computer Architecture
    2.2.4. Noise and Computation
    2.2.5. Reactionary Standardization
  2.3. Effects of English Centered Programming
Chapter 3. Linguistic Imperialism
  3.1. European Linguistic Expansion
  3.2. The Spread of English
  3.3. United States takes the Linguistic Helm
  3.4. English and European Languages
  3.5. Conclusion
Chapter 4. The Development of the American Computing Industry
  4.1. The American Iron Triangle
  4.2. Coming to Europe
  4.3. United States of European Countries
  4.4. Programming Language and Natural Language
    4.4.1. Development of High-Level Programming Languages
    4.4.2. Programming as Language
  4.5. Linguistic choices: ALGOL, SIMULA, C, C++
Chapter 5. Conclusion
  5.1. Parallels of Emancipatory Rhetoric
  5.2. Points of Control and Access
  5.3. English and Technical Democracy
References
List of Figures
Figure 1.1: Python Keywords
Figure 1.2: C keywords
List of Acronyms
ACM Association for Computing Machinery
COBOL Common Business-Oriented Language
DARPA Defense Advanced Research Projects Agency
EDVAC Electronic Discrete Variable Automatic Computer
ENIAC Electronic Numerical Integrator and Computer
FORTRAN Formula Translation
GAMM German Society for Applied Mathematics and Mechanics
GE General Electric
IBM International Business Machines
ICC International Computing Center
IFIP International Federation for Information Processing
IMF International Monetary Fund
MIT Massachusetts Institute of Technology
NATO North Atlantic Treaty Organization
NDRC National Defense Research Committee
NSA National Security Agency
ONR Office of Naval Research
OSRD Office of Scientific Research and Development
R&D Research and Development
RCA Radio Corporation of America
SQL Structured Query Language
UK United Kingdom
UN United Nations
UNESCO United Nations Educational, Scientific and Cultural Organization
UNIVAC Universal Automatic Computer
US United States
USAID United States Agency for International Development
Chapter 1. Introduction
“If we want America to stay on the cutting edge, we need young Americans like you to master the tools and technology that will change the way we do just about everything.”
- Barack Obama 1
The computer now plays an undeniably fundamental role in the way we live in the
21st century. This has been documented, researched and over-debated. More than an
instrument to make lives easier, it has become a source of meaning and, at other
times, a source of unmitigated disaster (“Google,” n.d.). This document is a product of
work done entirely through the assistance of various computers. At the same time, an
ever-increasing number of devices and systems have become computers. The
definition of what counts as a computer is becoming more inclusive, owing to the ever-
increasing ease with which computing machinery and memory can be crammed into
tiny chips (Ensmenger, 2010, p. 4; O’Regan, 2012, pp. 29–30). Entire cities, cars,
pens, fridges, microwaves and traffic lights have been embedded with tiny computational
devices that make them responsive to human interaction. The development of
networked computing, best exemplified by the internet, ushered in the information
age. Theoretically, everyone can be connected to this great network, making access to
information appear more democratized and decentralized. This allowed the compression
of time (and partly space) at such a magnitude that, relative to the new millennium,
human life had been drudging along at speeds analogous to that of an Aesopian tortoise.
In sum, the ubiquity of the computer in our daily lives cannot be overstated. It is
with this in mind that this project addresses a fundamental question: how have we
been communicating with the device that has become more than just an integral part of
our lives?
There are various avenues through which we communicate with the computer.
These interfaces make the computer easier to understand and control. For example, the
1 The President of the United States’ message for the inauguration of Computer Science Education Week in 2013 for the non-profit code.org (Finley, 2013)
touchscreen of the contemporary smartphone collects inputs from the fingers, processes
them in the smartphone’s processor and presents the user with outputs that have been
programmed in response to these inputs. The computer is a programmable device. It
carries out actions based on instructions that have been loaded into it. Everything done
with the computer, from talking to the latest quasi-“sentient” devices and watching
movies on a computer screen to using a global positioning satellite for navigation in an
unfamiliar city, is collected, moderated and delivered through the programming of a
multitude of computers and computational devices. Therein lies the root of our problem.
1.1. Programming and Binary representation
Computers do not understand inputs in the raw format in which they encounter
them. Smartphone software that responds to voice commands does not understand
those commands semantically.2 Neither does word processing software
understand the concept of typing and producing documents. The computer can only
produce output when the input it receives is represented in binary digits, i.e. 1’s and 0’s.
This is because, at its core, the computer is a digital device.
Digital devices work by converting analog (usually real-world mechanical) input with
the help of sensors and digitizers that reduce the input to 1 or 0 depending on a
threshold. A digital computer takes this process of representation and uses it to store
and carry out instructions.
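The process of representation described above can be illustrated with a minimal sketch in Python. The example assumes a single-byte character encoding for simplicity; real-world encodings such as UTF-8 use variable lengths, so this is an illustration rather than a full account of how machines store text.

```python
# A minimal sketch of binary representation: each character is mapped to
# a numeric code point, then to the 8-bit pattern of 1's and 0's that a
# digital machine actually stores.
text = "Hi"
for ch in text:
    code = ord(ch)              # numeric code point, e.g. 'H' -> 72
    bits = format(code, "08b")  # the same number as eight binary digits
    print(ch, code, bits)       # H 72 01001000 / i 105 01101001
```

Everything a computer "understands", including the English keywords discussed below, undergoes a translation of this kind before the hardware can act on it.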
Hence, all our avenues of communicating with the computer ultimately pass through
binary representation. While the binary representation and coding of daily
life is the only ideal for a digital computer, human beings find it difficult to communicate
with the device directly in its language. The way around this difficulty led to the rise of
programming and the development of high-level programming languages. A
programming language is a set of commands that, if used properly, can be interpreted
and converted into the binary representations that a computer can understand and use to
carry out specific tasks. There are currently more than 8,000 serious programming
2 This brings to the fore John Searle’s Chinese room thought experiment and the problems that have arisen from the intertwined definitions of information and data (Cox, 2013, pp. 31–32). The experiment clearly distinguishes between information, data and data processing, and highlights the computer’s lack of intelligence.
languages in the world (Bergin, 2007),3 perhaps orders of magnitude more, as it is
extremely easy to create a programming language if one has the technical know-how
to do so (J. Sammet, 1972; J. E. Sammet, 1969, pp. 49–50). Hence the problem
is not a lack of alternative programming languages; the very ease of creating them
calls the neutrality of English-based programming languages into question.
For the purposes of this essay, we will focus on the most popular
programming languages in academia and the computing industry. Industry and
academia are deeply intertwined in the development of programming. The most
popular languages are, depending on the ranking, Python, Java, C#, C/C++, Ruby,
Perl, JavaScript, Lisp, SQL, COBOL and others. They have come to dominate the world of
programming. These programming languages solve a myriad of technical problems, and
their efficiency and power as computational tools have been difficult to fault (Ben Arfa
Rabai, Cohen, & Mili, 2015; Delorey, Knutson, & Chun, 2007; Karus & Gall, 2011;
Prechelt, 2000).
The Python programming language continues in its aim to bridge the technical
difficulty of access to programming by making the process “easy to learn, read, and use,
yet powerful enough to illustrate essential aspects of programming languages and
software engineering” (Rossum, 1999). Java allows for the writing of code that can be
run on a virtual machine, so that one does not need to write a program to the
specificities of every computer hardware in existence as long as that hardware has the
virtual machine installed (O’Regan, 2012, pp. 133–135). C, originally created to help
write what has become the most popular contemporary operating system, UNIX,
made its name through its clean design choices and portability (Ritchie, 1993). C++ is an
extension of the work done on C that introduced object-oriented programming to the
language (Stroustrup, 1993).4
The job of the programmer is to use one of these languages to describe and
represent a problem in such a way that a computer can understand how to process the
tasks required. This is not easy; selecting the right programming language to
solve a specific problem is analogous to choosing the correct vehicle to get between two
3 There are also esoteric programming languages, designed to be complicated or obfuscating.
4 Object-oriented programming allows for the description of programs that mirror real-world understandings of hierarchical relationships.
points. A plane is fast, but on the highway it is dangerous; a car is fine on the road but
virtually useless on the ocean. Using the wrong programming language can make a
problem more challenging than it has to be. Conversely, the right language can be a
blessing when it feels as if it were designed specifically for the problem at hand. As a
result, programmers often have to learn more than one programming language to be
successful at their jobs. The same is true for hobbyists.
The information age has come with the rise of new hubs of financial power.
These powers are often portrayed in popular media in relation to their skill in
programming the computer. So successful has this been that several bodies of
literature and governmental campaigns address the importance of programming as
an essential tool in the life of the modern individual. This includes works on an early
generation of “disruptors”: the Michael Dells and Paul Allens, the Steve Jobs and Bill
Gates of the world. Then came the second generation of the social network economy: the Mark
Zuckerbergs, Sergey Brins and Larry Pages. We are currently in the era of the platform
and “sharing” economy, with the rise of Uber, Airbnb and the like (Gallagher, 2018;
Keating, 2012; Srnicek, 2017; Stone, 2017).
This concentration of wealth and technology in the North has been soundly
critiqued in various other works (Chinweizu, 1975; Mimiko, 2012; Spivak, n.d.). This
essay exposes a gaping hole in the portrayal of the programmer as a wielder of the
magical wand of disruption. I believe it continues in the line of these critiques by
examining how these problems manifest in the origins of computer programming as a field.
1.2. English and Computer Programming
The major computer programming languages (as discussed above), industrial
and pedagogical, borrow most of their keywords from the English language. Keywords
are reserved words in a programming language. These words have a direct relationship
with the interpretation of a written program and cannot be used outside their proper
context. They map directly onto the implementation of a program at the level of the
hardware, a mapping that must not be compromised at the level of the written software.
The question this paper asks is why and how the dominance of English came about.
Since the digital computer works internally on a binary, a-lingual system, how
has it come to be that the programming languages which are mapped onto the
computer have largely been run under the command of English words? Various
problems arise from this, including but not limited to pedagogical access
and the distribution of technique.5
Figure 1.1: Python Keywords
Figure 1.2: C keywords
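The English basis of these keywords can be observed directly. As a short sketch, Python ships its reserved-word list in the standard `keyword` module; every entry is an English word or abbreviation ("if", "while", "return", "import"), and none may be reused as an ordinary name.

```python
# Python's reserved words live in the standard `keyword` module.
import keyword

print(keyword.kwlist)                 # the full list of English keywords
print(keyword.iskeyword("while"))     # True: reserved, cannot be a variable name
print(keyword.iskeyword("mientras"))  # False: the Spanish for "while" is not reserved
```

The contrast in the last two lines makes the essay's point concrete: the language's grammar is anchored in English words, while their equivalents in other natural languages carry no meaning for the interpreter.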
The way technology shapes our understanding of the world has been rigorously
studied and philosophized. The next chapter begins by examining philosophies of
technology and contemporary critical scholarly understandings of technology, especially
through the diverse field of media archaeology. Through media archaeology, it examines
the ways the immaterial protocols of social spaces are mapped onto the materials that
have depoliticized their use. This motivates the study of this phenomenon and an
understanding of how some of its implications shape our world. I take a cue from the
budding field of Software Studies, which aims to challenge the “thinking about software
itself” that has “remained largely technical for much of its history” by “[using] and
[developing] cultural, theoretical, and practice oriented approaches to make critical,
historical, and experimental accounts of (and interventions via) the objects and
processes of software” (foreword to Cox, 2013). I explore literature showing that a major
disservice has been done in the computing world by the maintenance of English as the
world’s default pre-programming language. To tell the story of modern programming
languages, I engage with the history of ALGOL, an international effort to create a
universal programming language that was extremely influential in the field of
computing and the practice of programming. I believe it is important to have linguistic
5 The preponderance of English in standard and custom libraries is beyond the scope of this essay.
localization, no matter how banal (Billig, 2010), to reflect people’s understanding of their
environment and social space.
Chapter 2. Computing Technology and Society
“Trivia cannot be identified easily, special cases overwhelm the search for general patterns, custom and habit move performance into the realm of objective concept, experience warps both intuition and reason, fear of instability burdens insight with caution.”
- Alan J. Perlis6
Scholarly endeavor has been positively bolstered by the study of the mundane.
Through studying that which has become mundane and the endlessly chaotic processes
of its normalization, one begins to unravel the embodiment of social factors in the
minutest details of contemporary existence and its symbolic expressions. This is no
different for the field of computing. As the exploration of other works in the field will
show, the computer in all its various manifestations carries with it biases that have
been shaped by social factors, which it also actively reforms. How should we understand
technology, and how have scholars continued to understand it? I begin by
examining the framework adopted by Darin Barney in The Network Society, in which he
studies the development of the information age across several material and social
registers. Barney outlines four general approaches to the study of technology and
society: the Substantivist, Social Constructivist, Instrumentalist and
Composite schools of thought on the nature of technology and society.
2.1. Philosophies of Technology
The Substantivists, like Heidegger, Weber and Ellul, argued that there is an
inherent deterministic essence characterizing any technology, one that follows a
process of “instrumental rationality, standardization and homogenization…” (Barney,
2004, p. 39) and reifies the essence of the technology, almost always regardless of the
social environment in which it was created or utilized (Barney, 2004, pp. 37–40). Social
Constructivists like Kuhn, Feyerabend and Sandra Harding subjectivize the realm of
technique to the social. For them, the entire process that characterizes the creation,
adoption, use and effects of a technique is entirely social. To essentialize these
factors to an irreducible essence belies the intricate social mechanisms responsible for
6 Perlis describes the context in which the ALGOL programming language was created (Alan J. Perlis, 1978)
its existence. The Instrumentalist view holds that a technique develops in relation to the
problem being solved. It espouses that the solution to a technical puzzle is essentially
neutral: the solution that is developed and applied is objectively the best suited to solve
the problem. Finally, the Composite view of technology and society uses all the
preceding views to analyze techniques and how they affect or are affected by
society (Barney, 2004, pp. 39–42).
Langdon Winner argues that technological artefacts have politics, and inherent
ones at that. While they adopt the technical requirements for the maintenance
of the social order from which they came, Winner carries out the philosophical
exercise of looking at the inherent ways some technologies “embody specific forms of
power and authority” (1980, p. 121). Winner outlines two ends of a socio-
political spectrum that a technology can embody: democratic or authoritarian. The
authoritarian artefact is centered around power and power structures, while the democratic
artefact is centered around empowering individuals. Winner works with the spectrum
between resolute social determination of technology on one hand and naive technological
determinism on the other. He does not doubt the importance of social determination
theory; for Winner, this is a given. What he points out is that despite the social
determinants, some technologies and techniques have inherent deterministic outcomes.
He suggests that “we pay attention to the characteristics of technical objects and the
meaning of those characteristics.” That is, every technique is “a political [phenomenon]
in its own right” (Winner, 1980, p. 123). There are “instrumentalist” techniques that come
out of genuine resolve to solve some technical issue in a society, but there are also
techniques that are “strongly compatible with particular kinds of political
relationships” (Winner, 1980, p. 123).
The computer started out as an undemocratic machine confined to centers of
power, and some countries tried to keep it that way (Peters, 2016). James Scott has written
extensively on the nature of technologies developed in pre-centralized
agricultural economies and how they tend to favour the diffusion of power through the
technologies implemented, an argument that some techniques are inherently
democratic (J. C. Scott, 2009). This is the line of thought with which this essay aligns
most strongly: that despite the illusion Moore’s law creates by individualizing access to
computation, our reality has become a manifestation of gargantuan structural
powers implemented at the level of the individual.
This is in essence a deterministic substantivist view. However, because it does
not ignore the social factors that have led to the development of technique, it also falls
within the purview of the Composite understanding of technique. The next few paragraphs
examine works that explore the effects of media technology through the method of
media archaeology. Computer programming works as an interface between the human
and the computing device. By examining how people have studied other interfaces,
some patterns in the history of media become apparent. While there are social factors
affecting the creation, adoption and utilization of an artefact, there are essential social
proclivities that perpetuate themselves through the way technology manifests.
2.2. Social Dimensions of Computing Media
Media that is defined as “new” must justify its novelty. This is done through
extensive publicity that paints it in a progressive light. This was no different for the
development of programming as a new form of media. As Winner posits, “[s]carcely a
new invention comes along that someone does not proclaim it the salvation of a free
society” (1980, p. 122). This section examines how analyzing media from a
critical historical perspective makes evident the contradictions of understanding media
technology as a progressive, always improving, positivist, liberating field. I show how
media archaeology creates a new understanding of technology in relation to the social
environment of its creation and utilization. This portrays the study of technological
mediation as a study of how power maintains old ways of doing and knowing through
novel receptacles. Concisely, this faults the foundational principle of historicizing media
as an avenue for the progressive, democratic, empathetic and instrumentalist principles
that characterize the history of computing.
2.2.1. Media archaeology of Computational Media
Media archaeology provides an opportunity to study the material dimension and
the imagination of media’s influence on society outside of a progressivist outlook. This
progressive theory of media’s influence on society is a direct product of the tech
industry’s understanding of its own role in society. This is especially true when Silicon
Valley’s influence on that understanding is examined. In his analysis
of Silicon Valley’s cultural evangelism, Turner shows how industry figures in the Media Lab
and the Whole Earth Network painted a beautiful picture of the future as the product of
their work. This was done under the guise and rhetoric of the counterculture movement.
The industry was promoted as having a progressively advancing trajectory, a determinist
or substantivist argument in the service of progress in which life gets better with innovative
technological inventions. These innovations were projected to influence all spheres of
human existence and move humanity towards a new world in which power and
resources are distributed and transferred equitably. As Ronald Reagan, a product of this
rhetoric, proposed, “…in the new economy, human invention increasingly makes
physical resource obsolete… breaking through the material conditions of existence to a
world where man creates his own destiny” (quoted in Turner, 2006, p. 175). The
undercurrent of Reagan’s claim is that failure to make any headway in society
results from some backwardness or inability to utilize new tools. As Chun observes,
there are political and economic underpinnings to the concepts of newness and media.
She argues that newness is a political definition designed to make deregulation possible
(Chun, 2006, p. 3). Hence newness is a product of the neoliberal economics of the era.
Turner’s exploration reveals the contradictions between the rhetoric of these new-age
scientists and their revenue generation strategies. While its innovators, Brand and Kelly,
were products of the counterculture movements that preceded the 80s, they were very much
embedded in corporate-military structures of power: undemocratic structures that are
products of exploitative relations of power across the United States and the world (2006,
pp. 182–188).
2.2.2. Computers as Organisms
It is by looking at both the “material and metaphorical” (Turner, 2006, p. 182), in
contrast to rhetoric, that it becomes possible to observe patterns in the ways technology
influences popular (and even scientific) understandings of media. A pattern that comes
to the fore through this process is the fetishization of biological understandings of the
world. This fetishization continues to materialize in the study of the computer’s role in
human social organization. Parikka’s analysis of the importance of virology and
microbiology in the creation of modern computer architecture foregrounds this
fetishization. It is not surprising to see computer scientists, decades after von Neumann,
observe biological patterns in the organization of the networking technologies to which
they applied their simulation algorithms and computing analytics (Parikka, 2016, pp.
279–283; Turner, 2006, pp.
198–200). Hence, when the “…tools for examining and modeling the world,… and… the
algorithms with which they organized information mimicked the algorithmic patterning of
life itself by means of biological ‘technologies’ such as DNA” (Turner, 2006, p. 198), they
only followed in the footsteps of predecessors like von Neumann, whose interest in
creating a living machine influenced his development of modern computer architecture.
This study of media through a historical lens allows for an understanding of the
phenomena that characterize its existence. This is particularly obvious in Parikka’s
study of computer viruses.
Modern computing systems were designed to imitate the characteristics of
autonomous life. Their architecture was influenced by their designers’ rising interest
in bacteriology and virology. While the architecture was meant to solve computational
problems, the puzzles at its origin aimed to recreate life: to create an organism
that could exist, act and function outside the confines of a human operator and learn
from its immediate environment. What Parikka does is situate the material in the
zeitgeist of its creation. Without this, media technology is seen as existing in a different
dimension from the period in which it was created. This creates a dilemma, though an
inconspicuous one, in which the tools for social interaction are not a product of their socio-
economic and political environment but of an idealized world, unrestrained
by the realities of society. It is this dilemma that motivates McPherson’s analysis of the
most widely used operating system in the world, UNIX.
2.2.3. Race and Computer Architecture
According to McPherson, analyses of race in relation to media have often tackled
representation and access. This has overshadowed examination of the impact of
racialization in American discourse on the structural origins of the modern operating
system. She argues that the impact of social cleavages in the world at the time the
operating system was being designed is fundamental to understanding the technical
choices made by its designers (McPherson, 2012, p. 23). Specifically, she asks, “might we
argue that the very structure of digital computation developed at least in part to cordon
off race and to contain it?” (McPherson, 2012, p. 24). This question tackles technological
reproductions from the reality of their origination. She attempts the difficult task of
examining the foundational relations of power in American society and how they affected
the development of the UNIX operating system, particularly “the way in which emerging
12
logics of the lenticular and of the covert racism of colorblindness get ported into our
computational systems.”(McPherson, 2012, p. 25) This task is particularly onerous. It
requires a deep knowledge of the UNIX environment and the innovations that set it apart
from its predecessors and competitors. Drawing a straight line that establishes a direct causal link between the two seemingly parallel fields of computing architecture and race is difficult. Observing the characteristics of the period in which both manifest reveals the ideological links in UNIX software architecture. This forms the fiber of McPherson’s argument. For her, UNIX is a product of the post-Fordist
rationalization of the 1960s which is characterized by “intense modularity and
information hiding” (2012, p. 29). The user has no access to the kernel, which performs the bulk of important functions on the system; the shell becomes the go-between for the user and the system (McPherson, 2012, p. 29). Hence the operating system is hidden from the user. This is analogous to how racism and segregation are ported and transformed in the American landscape. As McPherson argues, “the emerging neoliberal
state [began] to adopt ‘the rule of modularity’”. This is observed in post-segregation
Detroit, which becomes more segregated by the 1980s. Hence segregation has been
carried out by a neoliberal kernel, hidden from the user under the shell of financial power
and social capital. McPherson reminds us that “computers are themselves encoders of
culture”(2012, p. 36) and their structures do not exist outside the reality of the historical,
social and cultural context in which they have been designed. Hence, in media, there is
a continuity of the old in the “new”. This is clearly demonstrated in Sterne’s analysis of the mp3 format; his proposal to create the field of format theory gives a powerful representation of this.
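McPherson’s point about “intense modularity and information hiding” can be made concrete in code. In the sketch below, which is illustrative rather than drawn from McPherson’s text, a user-level Python program asks the kernel for a communication channel and performs I/O through the narrow system-call interface exposed by the `os` module; the kernel’s internal implementation of pipes, buffering, and scheduling stays entirely hidden behind that interface.

```python
import os

# The user program never touches the kernel directly; os.pipe(), os.write()
# and os.read() are thin wrappers over the corresponding system calls.
read_end, write_end = os.pipe()          # ask the kernel to create a channel
os.write(write_end, b"hello, kernel\n")  # wrapper over the write(2) syscall
os.close(write_end)
data = os.read(read_end, 1024)           # wrapper over the read(2) syscall
os.close(read_end)
print(data.decode(), end="")             # prints: hello, kernel
```

The program can use the channel without ever seeing how the kernel implements it, which is precisely the “rule of modularity” the chapter describes.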
2.2.4. Noise and Computation
Sterne shows that the mp3 does more than just compress; it “carries within it practical and philosophical understandings of what it means to communicate.” The
history of the MP3 format interweaves with the history of telephonic transmission of
sound. Like UNIX, it is structured to prosper in a competitive market but carries with it ideological undercurrents of the moment of its actualization. It does this by cutting out noise which the imagined listener cannot hear, hiding the process, not unlike UNIX, from the user (2013, pp. 2–4). In advocating for the field of format theory, Sterne outlines
that aside from denoting the decisions that “affect the look, feel, experience, and
workings of a medium...,”(2013, p. 6) a format “...names a set of rules according to which
a technology can operate.”(2013, p. 7) Technical as well as aesthetic choices have to be
made and these come directly under the influence of the social and political zeitgeist of
its contemporary time period. These developments are not new. They never were. They
are a continuation of social conventions that reform themselves into new modes of
understanding. Hence, the study of formats “highlights smaller registers like software,
operating standards, and codes,... larger registers like infrastructures, international
corporate consortia, and whole technical systems”(Sterne, 2013, p. 11) and how they
continue to channel their power and authority through mediums that have become
ubiquitous. For example, the CD’s length may have come down to the ideals of executives at Sony, who chose the 74-minute length to fit their cultural understanding of meaningful music. Also noted is the mp3’s 128 kbps bit rate, designed to accommodate the telephone technology of the time it was developed (Sterne, 2013, pp. 12–15). Hence the CD did not necessarily signal a distinct break in the ways of recording data. It was tailored to serve functions similar to those of the cassette tape, inheriting conventional signs from its predecessors. Hence its “specification operate as a code – whether in software, policy, or instructions for manufacture and use – that conditions the experience of a medium and its processing protocols” (Sterne, 2013, p. 8), which makes its positivist determinism something of a misnomer, because it is an actualization of social processes that have “been around for over half a century” (Sterne, 2013, p. 7). As Sterne notes, “mp3 works so well, because it refers directly – in its very conception – to the sounds and practices of other conventions of sound-reproduction practice” (2013, p. 26). An
observation of these continuities leads Manovich to observe a similar pattern in his
analysis of the Language of Cultural Interfaces. He notes –
“given that computer media is simply a set of characters and numbers stored in a computer, there are numerous ways in which it could be presented to a user. Yet, as is the case with all cultural languages, only a few of these possibilities actually appear viable at any given historical moment”(Manovich, 2001, p. 82).
Thus, an examination of how these mediums interact and how humans utilize them
brings to light the power dynamics that underpin the dematerialized physicality of the
format and its imagined ideal. The definitions of these formats and the protocols that
determine how devices communicate and how we communicate with them are set in
quasi-democratic ways. They are produced under a chimera of equity in which ease is
equated with democracy.
2.2.5. Reactionary Standardization
Computing protocol setting, in relation to the internet, is ideally open to everyone, but the barrier of knowledge required to make any meaningful, actionable contribution is quite steep. Technological standards are set by an exclusive club of professors and experts who have ties to industry. Their social connections are not representative of diversity in the United States, and the United States is in turn not representative of the world’s diversity. But
as Turner observes of the industrial giants of the Whole Earth Network, "they… largely
turned away from those whose bodies, work styles, and incomes differed from their
own”(2006, p. 176). Galloway also notes that three of the internet’s protocol pioneers
had come from the same high school in San Fernando Valley. One of these is Vint Cerf
who made the convenient observation that users don’t want to see the protocols behind
the technology they make use of (Galloway, 2006, p. 187). Though the center-periphery model is shunned by computer scientists, they create a protocol for every medium that is available. Some protocols are distributed, some are hierarchical. The internet, despite its anti-federal formation, is universalist. It allows for decisions to be made at the local level, which is where the chimera of equity becomes actionable. But local decision making must conform to the universal standards. As Galloway summarizes, “standardization is the politically reactionary tactic that enables openness” (2006, p. 196). Hence, for the internet to be “open” it has to close out noise (Galloway, 2006, pp. 193–196). The definition of noise is in many ways outside the jurisdiction of the public.
The study of interfaces and how we communicate with and use the computer is
inevitably a study of power, the ways power reforms and ports itself through materials
that were once centralized and now appear distributed. These and other unmentioned
works are what inspired this study.
2.3. Effects of English Centered Programming
The use of English in programming, industrially and academically, presents a steep hill to climb for learners of programming. Programmers must learn programming-ready English. If the fallacious argument that computers are meant to liberate is taken at face value, the English language should not be a necessary condition for making that argument a reality.
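The extent of this dependence is visible in even the most trivial program. The illustrative sketch below (a generic example, not drawn from any of the studies cited in this chapter) shows that every reserved word in a short Python function is borrowed from English: a learner must first parse “def”, “if”, “elif”, “else” and “return” as English vocabulary before parsing them as code.

```python
# Every reserved word below ("def", "if", "elif", "else", "return") is an
# English borrowing; the identifiers are free, but the grammar is not.
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

print(classify(-3))  # prints: negative
```

The learner’s own names (here `classify` and `n`) may come from any language, but the scaffolding that makes the program a program is English throughout.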
Qian and Lehman, in an attempt to find the relationship between gender, prior education and success in learning programming, stumbled upon the fact that a working knowledge of English is a major factor in predicting a student’s ability to pick up programming skills effectively (Qian & Lehman, 2016). Mhashi and Alakeel found, based on a series of tests given to students of computer programming at Tabuk University, Saudi Arabia, that students who were weak in English were also weak in programming (Mhashi, 2013). Banerjee et al. found that it was practically impossible to introduce adults
to the concept of programming if they did not have any working knowledge of English.
They utilized Family Creative Learning (FCL) which involved empowering older family
members in the process of teaching school children how to code. This was very difficult
because older members of disenfranchised communities often had very little working
knowledge of English (Banerjee et al., 2018). Dasgupta and Hill studied the progress of
learners of Scratch in English and localized languages. They found that those who learnt
in their localized languages were quicker to deploy novel applications of the
programming language while learning. Quoting UNESCO’s 1953 directive to ensure the
localization of knowledge through the use of people’s mother tongues, this study was
done in line with others that call for the localization of interfaces (Dasgupta & Hill, 2017).
Guo looked at people from different parts of the world outside the anglosphere trying to learn how to program, and their reaction to the amount of English required to do so. There were many qualms, because every step of learning to program comes with English linguistic and cultural baggage that foreign-language learners have to wade through (Guo, 2018). This paper does not weigh in on the cognitive advantages of programming in localized languages, but as Liblit et al. show, there are semantic processes that work outside the inherent logic of programming when language is easily accessible (Liblit, Begel, & Sweetser, 2006). There is an abundance of literature that
shows that, even though there are inconsistencies in the perception of the semantics of English keywords and the effects they have on the computer (Du Boulay, 1986), anglo-centric programming is still detrimental to learning. The problem of anglicizing the programming process was critiqued from the early days of computer programming. Pioneering programmers like Jean Sammet, Andrei Ershov and Grace Hopper all advocated, to varying degrees, for the localization of programming languages (Eastman, 1982; J. E. Sammet, 1966; Wexelblat, 1981, pp. 7–24).
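The structural depth of the problem these calls for localization confront can be demonstrated directly. In the hypothetical sketch below (an illustration of mainstream language design in general, not of Scratch, whose interface is localizable), a learner is free to localize identifiers, but any attempt to localize the keywords themselves is rejected by the language’s grammar, which hard-codes English.

```python
# Identifiers may be written in the learner's language...
code_en = "resultat = 1 if True else 0"
compile(code_en, "<en>", "exec")  # compiles without complaint

# ...but the keywords may not: a French rendering of the same logic
# ("si ... sinon" in place of "if ... else") is a syntax error.
code_fr = "resultat = 1 si Vrai sinon 0"
try:
    compile(code_fr, "<fr>", "exec")
except SyntaxError:
    print("keywords cannot be localized")  # prints: keywords cannot be localized
```

No configuration option or effort on the learner’s part changes this outcome; the English grammar is built into the interpreter itself.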
To understand the role of technology in society, one must examine the pre-
technical mechanics of society as a way to reverse engineer the effects of artefacts on
society. What appears behind the veneer of neutrality attributed to an artefact is often exposed as a complicated configuration of historical factors that normalize infrastructures of power which work to benefit only a certain class of people. A good conspiracy always appears neutral to those who experience its effects, whether positive or negative. Its execution is also not wholly dependent on its conspirators; as a result, it is very difficult to demonstrate its problems to those who thrive under its realization.
In fact, getting those who benefit from the success of a conspiracy to realise they are the
executors of longstanding insidious projects often requires so much energy that that
process in itself can only be properly executed through a conspiracy.
Chapter 3. Linguistic Imperialism
“A man who has a language consequently possesses the world expressed and implied by that language.”
- Frantz Fanon 7
This section explores the concept of Linguistic Imperialism and uses it to explain
the rise of “global” English. Just like the development of the computing industry, the rise
of global English is attached to the United States’ post-Second World War imperial project. However, this project benefited from the work started by the British Empire more than three centuries before the American imperial project.
Though its function as a tool for communication can hide its political nature,
language as social technology is malleable. Imperial projects have never shied away
from using language as a core tool for political domination. Hence, language is not
apolitical, it serves various interests that go beyond communication and thought. It
serves the logics of power, control and domination. These are always rationalized in various ways: the supposedly superior inherent logic of dominant languages, the ethnic supremacy of their speakers, nationalist projects, or the argument that modernity demands the use of a particular language (Phillipson, 2012, p. 207). For example, the
French revolution hinged on the idea that power belonged to the people. This meant
that the people, for the first time, had to deliberate openly to organize society. Here,
clear lines of communication became paramount to the functionality of democracy. While
there were lofty goals of organizing the multilingual dominion of the deposed monarchy
in an egalitarian fashion, the new authorities perceived major impracticalities in
maintaining a multilingual society. Multilingualism had not caused too much of a problem in the autocratic monarchy, mostly because participation in societal governance happened
through representatives of peoples’ language group, the village, the parish, or a few
designated government officials who could speak the language of the courts as well as
that of the people over whom they acted as the monarch’s surrogate. Once the
revolution nominally brought power to the third estate, democracy required an autocratic
language, French (Wright, 2012, p. 60). The spread of European languages usually
7 From Black Skin, White Masks (Fanon & Lam Markmann, 1986, p. 18)
follows this functionalist approach to organizing large multilingual domains. But its
domineering persistence and logics are apparent.
3.1. European Linguistic Expansion
European linguistic expansion “is a direct consequence of successive waves of
colonization and the outcome of military conflict between rival European powers
worldwide” (Phillipson, 2012, p. 205). It is not the result of some inherent logic or
malleability of the languages or the technical and economic development of the nations
where the language originates. All these factors are a by-product of the colonial projects.
For example, the Japanese nation has experienced major economic developments since
the end of the Second World War which has come in lockstep with major indigenous
technological development. Hence, Japanese should naturally be in the contest to become one of the languages of modern international technology. Yet this is not the case: Japanese remains a language spoken by little more than the 125 million people who inhabit the island nation.
This is because at the heart of the adoption of Western European languages in much of
the world is the colonial project (Baugh & Cable, 2002, p. 4; Phillipson, 2012, p. 208).
European colonization (and imperialism in general) works in various ways; the main goal is economic dominance. The first mechanism is the apprehension of the colonized’s physical being for the purpose of this economic endeavor, as was practiced through slavery. The second is the exploitation of the colonized’s natural resources, such that
anything of value that once belonged to the colonized and their environment are now
used as the colonizer sees fit. There is the colonization of the mind, that subjects the
colonized into thinking about the world from the point of view of the colonizer. Finally,
there is the colonization of the colonized’s entire social system in service of the
colonizer. The colonial projects are branded with positive names; yet, these positive and
seemingly neutral assertions of American “freedom”, British “white man’s burden” and
French “civilizing mission” are indisputable tragedies to any community that came into contact with them.
3.2. The Spread of English
The rise of English was carried by the migration of up to 21 million people from the British Isles all over the world in service of the colonial project. This was bolstered by the development
of communication networks, railways and underwater cables, to connect different arenas
of the Empire to its center. England also became the trading hub for one third of the
world’s securities. This coincided with the effective conspiracy to prevent war amongst
European colonizers in the Scramble for Africa that was solidified by the Berlin
Conference in 1884 (Phillipson, 2012, pp. 207–209). This was the beginning of the
modern asymmetry of power relations between colonizer and colonized which has
carried over to the way languages are viewed and used.
English’s dominance came with the economic project to accrue more resources
to England. As Walter Rodney argues about the schooling system in colonial Africa,
“[t]he main purpose of the colonial school system was to train Africans to help man the
local administration at the lowest ranks and to staff the private capitalist firms owned by
Europeans.” (quoted in Phillipson, 2012, p. 213). Here the language is imposed on a
social structure such that it becomes the medium of command and instruction through which the empire is run. This is analogous to the way human language functions in a computer: a relation of total dominance, in which the digital space runs only on the directives of its controller (Chun, 2011, pp. 19–31). With the political ascendance of the
United States, its policy makers began to see the project of anglicizing the world as
beneficial to the United States as well as the United Kingdom.
3.3. United States takes the Linguistic Helm
The concerted effort by the UK and US to anglicize the world began in the 1930s. After the First World War, various conferences and projects, like the Carnegie Conference of October 1934, were held as collaborative efforts between the UK and US
to make English the world’s lingua franca. The prospect of having an anglicized world
enticed Winston Churchill who argued that “The widespread use of this would be a gain
to us far more durable and fruitful than the annexation of great provinces.” (quoted in
Phillipson, 2009, p. 112). U.S. foundations also promoted research projects in Europe
especially in the interwar period; funding other international conferences, sometimes
with direct government support to share expertise on how to anglicize Europe, Asia and
Africa. The effects were immediate. For example, German scientists began to migrate to the US on such a large scale before the Second World War began that David Gordon
argues that this changed the lingua franca of logic from German to English (Gordon,
1978, p. n47). The United States used its financial power through the Marshall Plan to
anglicize West Germany’s scientific infrastructure. It pushed for West Germany’s
collaboration with NATO (Gordin, 2015, p. 271; Phillipson, 2009, pp. 112–117). The
anglicization of the world effectively became an American project.
This international order functions to serve the American imperial agenda. The
levers of control work through the institution of global organizations created in the
Bretton Woods Conference, the institution of the International Monetary Fund (IMF), the
World Bank, North Atlantic Treaty Organization (NATO), the United Nations (UN), and
even the Rhodes scholarship (Phillipson, 2012, pp. 220–223). On June 11, 1965, Lyndon Johnson signed a statement to promote English worldwide, because there was a “growing need for English”. As Gordon notes, this confirmed what was already an obvious process, carried out through organizations like the US Agency for International Development (USAID) and the Peace Corps (Gordin, 2015, pp. 307–308; Gordon, 1978, pp. 49–50), organizations whose colonial machinations have been soundly criticized by those who have had to live on the receiving end of their lofty liberation agendas.
3.4. English and European Languages
The French and German languages did not stand a chance once these two powerhouses of imperialism had aligned their goals (Gordon, 1978, p. n49). For example, the influence of the German language in academic circles was, after the Second World War, beginning to wane, perhaps even under political assault. This was
officially legitimized when UNESCO voted against the use of German as one of its
official languages, clearly as a punishment for the wrongs attributed to Germany during
the Second World War. At about the same time, the Russian language started to
become the primary second language taught in Eastern Europe. It became “the most
important negotiating language of the socialist camp” (Gordin, 2015, p. 276). There were continuous efforts to get the Russian language into the German lexicon. But English had already become a language of prestige. Gordin has shown that, through a myriad of financial, structural and physical (military) guttings, major academic journals in Germany and Europe could not hope to compete with American-run journals. This also brought with it a continuation of the pre-war brain drain that characterized West Germany. By the time the tide was stemmed in the 1970s, the damage had been done and German had become a language for Germans only. Due to the repercussions of
the Nazi party’s actions in Europe, German policy makers have generally been reluctant
to promote the language abroad.
The French also perceived the French language as being on the defensive against an onslaught of “Coca Cola imperialism” (Gordon, 1978, p. 44; Ferguson, 2012, p. 487).
After the war, the writing was on the wall. French intellectuals and scientists lamented
the loss of French language prestige, not to English, but to American (Gordon, 1978, p.
47). The effects of this Americanization can be seen in how the dominance of English
goes unquestioned in various “neutral” spaces. The Nobel Prize empirically rewards English speakers, regardless of the importance of the work being examined. Even the Bandung Conference in 1955 chose the English language as its one official language! (Gordin, 2015, pp. 270–321)
3.5. Conclusion
The most neutral thing the English language had going for it after the Second World War was its adoption as a backlash against Nazi atrocity. In this vein, English
should also have been avoided, because of its participation in the Trans-Atlantic Slave
trade, its genocidal attempt at the destruction of Native American cultures and its
exploitation of the global south. However, the world functions on a short-term memory.
Here the dominance of English in programming has become hegemonic and
naturalized, interlocking elegantly, though unjustly, with other structures of imperialism, very much like liturgical languages, which tend to stay intact regardless of changes in colloquial language structure (Paulston & Watt, 2012, p. 336). Like liturgical languages,
access is bounded by the elitism structured in the reification of the colonial imaginary.
Linguistic decisions, seemingly “… individual choices, while not determined, are
constrained by an existing socio-economic environment constructed by British
colonialism, twentieth-century US dominance and contemporary forces of globalization”
(Ferguson, 2012, p. 479). The policies to promote English through World Bank and
USAID continue through to the twenty-first century, almost always at the expense of
local languages. This is not passive, it is an active project to destroy any possibility of
bottom-up globalization. As Lawrence Summers, former Chief Economist of the World
Bank, argues “‘the substantial investment necessary to speak a foreign tongue’” is not
“‘universally worthwhile’ given ‘English’s emergence as a global language, along with the
rapid progress in machine translation and the fragmentation of languages spoken
worldwide.’” (quoted in Gordin, 2015, p. 322).
Chapter 4. The Development of the American Computing Industry
“…[do] not conclude that because a space was present or absent that this is a requirement…”
- Jean Sammet.8
War, especially total war, brings with it profound new ontological realities. An
example is British territorial gains in North America from the War of Spanish Succession
and the Seven Years’ War which led to profound changes in the structuration of North
America (Phillipson, 2012, p. 206). By the time of the first major war for global colonial
restructuring (World War I), the Entente perfected a conspiracy to prevent the parallel German colonial project of setting up a world wireless network from taking hold. This was done through impossible market requirements and the pulling of blunt colonial levers to prevent the Dutch company Larsen & Co from setting up a wireless network
in China (Tworek, 2016). The loss of this war caused the meteoric drop of the German
language from global utility (Gordin, 2015, p. 7). This section shows the ways the United
States and its military institutions took the helm of a profound disruption in the global
order after the Second World War. By pushing and pulling political, social and economic
levers here, there and everywhere, the American military created a novel global reality.
This created the scenario such that to instruct the computer, one must think not in
English, but in American.
The success of the American computing industry was bolstered by many factors.
America was very much interested in building a “western” science in Europe that would
be aligned with the interests of the United States. Investments like the Marshall Plan
restructured scientific research away from the traditional centers of scientific prestige
towards the peripheries while encouraging publications in English (Gordin, 2015, p. 271).
From the 1940s to the 60s, the U.S. armed forces were responsible for the development of the computer globally (Edwards, 1996, p. 43). “The absolute Allied victory supported a
vast new confidence in the ability of military force to solve political problems. The
occupation of Germany and Japan meant an ongoing American military presence on
8 Jean Sammet describes the syntactical choices made in her work on the history of programming languages (J. E. Sammet, 1969, p. ix)
other continents” (Edwards, 1996, p. 57). This meant heavy investment in the technical capabilities of the military (Schiller, 2008). This is what Sandra Braman describes, in the
foreword to Benjamin Peters’ How not to network a nation, as “the socialism of…
capitalist America” (Sandra Braman in Peters, 2016, p. ix). Peters further develops this
idea stating “[p]erhaps the strongest example of hierarchy and socialism in modern
America is also its greatest bastion of patriotism—the U.S. armed forces, whose
command-and-control silos deliver social services and benefits to its members” (Peters,
2016, p. 23). Compared to its allies and opponents at the end of the Second World War, the United States was left relatively unscathed. It had built an army in response to the drums of war, reorienting its production capabilities and capacity, and then the war was suddenly over. It had built a massive infrastructure without any enemy
to fight. But the momentum of this infrastructure continued to function like the war was
unfinished.
4.1. The American Iron Triangle
In 1940 as the war was raging in Europe and Asia, Vannevar Bush, the American
inventor of the Differential Analyzer, through his immense social capital created the
National Defense Research Committee (NDRC). He hoped that it would be a civilian-
military research organization to help prepare for the total war dynamic of the new global
order. But the NDRC was prohibited from carrying out weapons research. As a result,
Bush created the Office of Scientific Research and Development (OSRD) which, by the
end of the war, had gained the confidence of the executive arm of government, spending
in excess of $100 million (about $1.5 billion in 2019 dollars)(“CPI Inflation Calculator,”
n.d.).
This was the not-so-humble beginning of the military-industrial complex described by Eisenhower, then termed the Iron Triangle. The Iron Triangle is
the unholy cooperation between the military, the university and private business that
continued long after the war was waged (Edwards, 1996, pp. 46–47). After the war, the
system was reorganized nominally such that actors engaged in military research at the
universities started to form or join private corporations (Cortada, 2014, p. 71). The war
had allowed the American academic research system to align very thoroughly with the
military. This produced prolific results for the United States. During the war, the
computers were used to decipher enemy codes and calculate ballistic trajectories.
Though there were expectations of post-war expenditure constrictions, this did not
materialize (Edwards, 1996, p. 52). Proxy wars like the Korean War motivated the need
to keep developing this technology. This meant continued reliance on work being done
in the universities (Cortada, 2008, p. 7). For example, ENIAC, “America's first full scale electronic digital computer”, was somewhat unreliable:
“unable to store programs or retain more than twenty ten-digit numbers in its tiny memory, required several weeks in November 1945 to run the program in a series of stages. The program involved thousands of steps, each individually entered into the machine via its plugboards and switches, while the data for the problem occupied one million punch cards” (Edwards, 1996, p. 51).
Despite the complaints, it was very important, if not fundamental, to the calculations that refined the design of the hydrogen bomb. Afterwards, with knowledge from the work done on the ENIAC, improvements were made in the next computing machine, the EDVAC, which ran by storing instructions in the computer’s internal memory and is the direct architectural ancestor of the kinds of computers we now use.
After the war, both Truman and Eisenhower were able to pursue their neo-expansionist goals in novel ways. Vannevar Bush pushed the idea of another research organization, desiring the creation of a civilian-run military research body. But this idea of a civilian-directed Naval Research Foundation was not followed through the way he proposed; the military-run Office of Naval Research (ONR) was created instead.
“By 1948 the ONR was funding 40 percent of all basic research in the United States; by 1950 the agency had let more than 1,200 separate research contracts involving some 200 universities. About half of all doctoral students in the physical sciences received ONR support. ONR money proved especially significant for the burgeoning field of computer design. It funded a number of major digital computer projects, such as MIT's Whirlwind, Raytheon's Hurricane, and Harvard's Mark III” (Edwards, 1996, p. 60)
The success of the Iron Triangle at home was carried abroad as part of the Marshall
Plan as discussed at the beginning of the chapter. (Edwards, 1996, pp. 51–60).
4.2. Coming to Europe
The rise of the United States was hardly inevitable; the previous section goes a long way in showing what other countries had to compete with in terms of infrastructural and economic organization. This section touches lightly on some of the critical developments that pushed the rise of the American variant of computing technology. Compared to the United States, the British government, due to the financial burdens of post-war reconstruction, could not develop its computing industry at the same rate, so that despite being “ahead” in computer manufacturing and research at the end of the war, “by 1965 more than half of computers in Britain were U.S. made” (Edwards, 1996, p. 62). This was also true for the other European powers and Japan, who were still reeling from the effects of the war. In contrast, the U.S. had an expanded military without any enemies to fight; this fueled Cold War anti-Russian, anti-communist sentiments that translated into technical and economic policies. The U.S., having effected the Monroe Doctrine only within the confines of its continent for the preceding century, finally had an opportunity to test it on a technical European stage (Edwards, 1996, pp. 54–62).
Large European economies could not scale their markets to compete with the bulwark of the American business machine. The British consumer market was too small, preventing the establishment of a comparatively strong computing industry. France was unable to fight what it saw as the American infiltration embodied by IBM (Cortada, 2008, pp. 8–10); by 1964 Bull, its leading computer manufacturer, had lost French control to General Electric (GE) shareholders (Tatarchenko, 2011). Smaller European countries like Finland and Czechoslovakia could not hope to compete with any of the ideologically entangled hegemonic states. Finland ceded so much computing independence to IBM that in the 1950s the computer was known there as the “westernizing machine,” marking the political choice made by Finnish policy makers (Paju & Durnová, 2009).
Amid all the proxy wars for dominance during the Cold War, it is easy to miss the computational arms race running in tandem. As Paju and Durnová describe it, “the cold war was… a technological battle for supremacy” (Paju & Durnová, 2009, p. 303). The United States carried out a dedicated, government-sponsored development of the computing industry:
“even after mature commercial computer markets emerged in the early 1960s, U.S. military agencies continued to invest heavily in advanced computer research, equipment, and software. In the 1960s the private sector gradually assumed the bulk of R&D funding. IBM, in particular, adopted a strategy of heavy investment in research, reinvesting over 50 percent of its profits in internal R&D after 1959” (Edwards, 1996, p. 63).
IBM, for example, was not willing to produce the IBM 701, a commercial computer, without letters of intent from the Department of Defense. Nor was this restricted to IBM: companies like Eckert-Mauchly (founded by members of the ENIAC project), Bell Labs, and Engineering Research Associates were given continuous support through various government contracts, specifically from the armed forces:
“the major corporations developing computer equipment—IBM, General Electric, Bell Telephone, Sperry Rand, Raytheon, and RCA—still received an average of 59 percent of their funding from the government (again, primarily from military sources). At Sperry Rand and Raytheon, the government share during this period approached 90 percent” (Edwards, 1996, p. 61).
IBM was particularly ingenious in its utilization of these handouts. It reshuffled itself in a way that made it look European while carrying out American imperial interests in Europe. IBM administrators relished “[b]ringing European citizens together with a common purpose… in a world where neighbors had so recently been exhorted to kill each other” (Paju & Haigh, 2016, p. 295). IBM was run by Thomas Watson, whose theory of universalism seemed to anticipate Thomas Friedman’s McDonald’s theory.9 As Paju and Haigh describe him, “Watson was an autocrat with a weakness for pomp and an enduring need to ingratiate himself with the politically powerful” (Paju & Haigh, 2016, p. 296). It was only in the 1960s that Europeans began to push back against the company’s continental dominance.
The investment of the U.S. military in these projects cannot be overstated; it supplied half of the money used to develop integrated circuits, especially for Air Force projects. Organizations such as DARPA maintained more than a passing interest in the field, contributing to “…artificial intelligence, semiconductor manufacture, and parallel
processing architectures, in particular directions favorable to military goals” (Edwards, 1996, p. 64). How does one stop this juggernaut that is American imperialism? International alliances presented one way to respond, but these too were fraught with American booby traps.
9 Friedman’s theory holds that “two countries with branches of McDonald’s will never fight each other,” a claim falsified when Russia invaded Georgia (Paju & Haigh, 2016, p. 269).
4.3. United States of European Countries
Despite Europe’s rancorous political history, there have been continuous inclinations to prevent the reification of its differences. This is usually traced through the continent’s relationship to the Latin language inherited from the pre-industrial Roman Empire (Liu, 2018, pp. 11–24). The historian of science Michael Gordin has argued that this is largely an imagined understanding of the political and intellectual realities of the language (2015, pp. 23–49).
As early as 1946, UNESCO began work on creating an International Computation Centre (ICC). U.S. policy makers agreed with the drive only on the condition that the centre not build computers that could rival those in the United States; the U.S. wanted to bolster its commercial computing industry, in line with American economic and political interests in Europe. The U.S. actively worked to make UNESCO enact its foreign policy agendas, nudging it to adopt an anti-communist outlook. Because it was the largest contributor to the organization, it was able to make such ridiculous demands (Nofre, 2014, pp. 413–426). The U.S. demanded that the ICC be situated in Europe. The ICC was to be funded by IBM and the Rockefeller Foundation, itself an enactor of the American empire as strong as IBM, perhaps stronger (Krige, 2006, pp. 71, 75–151). American imperialism is not neutral; it has always been tied to the levers of corpo-military power. U.S. policy makers played ideological roles in UNESCO and in the post-war development of science in Europe. Krige posits that though these people were not on the State Department payroll,
“they certainly shared the values of the liberal internationalist wings… and worked closely with them. Power, policy, and ideology were fused in these actors; they were individuals in their own right but also bearers of a widely shared, though not universal, conception at home of America’s role and responsibilities in the postwar world order.” (Krige, 2006, p. 258)
The fear of a popular communist takeover of the Western bloc made U.S. policy planners resolve to align Western Europe with “Washington’s field of Force” (Krige, 2006, p. 254). The launch of the Soviet Sputnik I led the Americans to make a decisive effort to involve themselves in European projects (Tatarchenko, 2010). Smaller countries doubted the internationality of the organization and turned inwards, developing their own national computing industries instead; these, owing to their smaller size, often proved even more amenable to American interests (Nofre, 2014, pp. 424–431; Durnová, 2014; Lewis, 2016; Paju, 2008; Paju & Haigh, 2018).
Hence when Heinz Zemanek, head of the International Federation for Information Processing (IFIP) from 1971 to 1974 and head of IBM’s Vienna laboratory, declared IFIP an “international cooperation” two decades after the war, it was an example of the blurred lines that allowed “the scientific mode of diffusion [to coexist] with the business mode” in the computing industry of Europe. Several international organizations were floated through the American National Joint Computer Committee, created in light of the financial advantages that could accrue to American companies like IBM and Remington Rand from supporting such projects in Europe (Tatarchenko, 2010, pp. 46–51). The linguistic imperial project worked in tandem with the computing imperial project; they came together in technical space during the creation of the ALGOL programming language.
4.4. Programming Language and Natural Language
Before going into how ALGOL came to be, this section examines developments in the history of the computer generally and of programming specifically. The computer is thought to consist of two parts: the physical hardware and the intangible software. The hardware consists of a memory for short-term storage, a central processing unit for carrying out arithmetic and logic operations, a control unit that executes the instructions given to the computer, and input and output systems to ease communication with the device. This is commonly called the von Neumann architecture, named after John von Neumann, who proposed the design after working with the ENIAC computer. ENIAC was programmed externally with switches and wires; the von Neumann architecture internalized the instruction process. Von Neumann’s architecture allows the computer to be modified to perform new tasks because instructions and data are stored in the same memory. Since instructions are represented as binary data in the computer’s memory, they can be updated depending on feedback from operations on the computer’s data. Hence the computer is no longer restricted to the physical reprogramming that characterized its original design. The digital computer processes instructions sequentially as sequences of 1s and 0s, based on the two states of its switching elements. The previous iteration of computers, analog computers, were large, unwieldy devices that functioned through vacuum tubes, gears, disks and the like.
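The stored-program idea can be sketched in miniature. The toy machine below uses a hypothetical four-instruction set, written in Python purely for illustration; it corresponds to no historical computer. Its one essential feature is the von Neumann one: instructions and data share a single memory, so a program could in principle read, or even rewrite, its own instructions.

```python
# A toy stored-program machine. Instructions (tuples) and data (plain
# numbers) live in the same memory list, as in the von Neumann design.
def run(memory):
    pc = 0   # program counter: which memory cell to execute next
    acc = 0  # accumulator: the machine's single working register
    while True:
        op, arg = memory[pc]
        if op == "LOAD":     # copy a data cell into the accumulator
            acc = memory[arg]
        elif op == "ADD":    # add a data cell to the accumulator
            acc += memory[arg]
        elif op == "STORE":  # write the accumulator back into memory
            memory[arg] = acc
        elif op == "HALT":
            return memory
        pc += 1

# Cells 0-3 hold the program; cells 4-6 hold the data it operates on.
memory = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
print(run(memory)[6])  # the sum 2 + 3 lands in cell 6
```

Because the program occupies ordinary memory cells, a STORE aimed at cells 0 to 3 would overwrite an instruction, which is precisely the self-modifying capacity described above.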
Vannevar Bush’s differential analyzer, used to calculate sixth-order differential equations, worked through a combination of gears, shafts, wheels and disks, and stored its variables in capacitors. The first digital computers used vacuum tubes to represent their binary data. In the late 1940s, William Shockley and his colleagues at Bell Labs invented the transistor, which would eventually lead to developments in integrated circuit technology. Shockley founded Shockley Semiconductor in 1955, and some of his partners on those projects went on to form Fairchild Semiconductor. Lisa Nakamura examines the exploitation of Native American women in the building of some of the first circuits (Nakamura, 2014). Fairchild would go on to inspire the creation of Intel, founded by Robert Noyce and Gordon Moore, after whom Moore’s Law is named: the observation that the number of transistors per unit area doubles roughly every two years, allowing more circuitry to be designed onto a wafer and leading to smaller and faster computers (O’Regan, 2012, pp. 23–35).
4.4.1. Development of High-Level Programming Languages
Jean Sammet uses the deprecated IFIP-ICC definition of a language as “a set of symbols and rules or conventions governing the manner and sequence in which the symbols may be combined into a meaningful communication.” Donald Knuth describes a programming language as a way of “telling another human being what one wants the computer to do” (Scott, 2009, p. 10). The programming language is thus the means by which a human being communicates with a computer, and for the language to be effectively utilized, all parties involved must be able to use it to communicate with each other. The digital computer speaks only one language, binary; every level above that is part of a process of tailoring it to human convenience. There are generally three waves in the historical development of programming languages: machine code, assembly, and high-level languages. High-level programming languages, being the closest to natural human language, help a programmer interact with the machinery of the computer; one can remain ignorant of the underlying hardware of the device and still write programs for it.
First-generation programming, i.e. binary programming and its base variants such as octal and hexadecimal, is the most direct way to instruct the computer. It executes commands fast, unburdened by interpretation and compilation. The problem is that writing code in binary, especially with the hindsight of high-level languages, can be really difficult and time-consuming; the time gained in executing a fast program is often lost in coming up with compliant binary code. On first-generation computers, code was entered through panel switches, a very error-prone process (O’Regan, 2012, pp. 121–144; Wexelblat, 1981, p. 2). Then came the development of machine language programming, which looked something like CLA 00010010000... Next came the development that allowed register locations to be written as numbers, so that CLA 00010010000… became CLA 64. Then came the innovation of relative (regional) addressing: a program written in this binary-decimal-coded form could be divided into sections, with each section’s place in memory given relative to the starting location of the code. Finally came “the development of completely symbolic notation and addressing for both instructions and data… [which] freed the programmer from worrying about changing” minute details of code every time a new program was written. Programmers could write CLA TEMP, where TEMP would be the memory location of a variable, e.g. a temperature. This combination with symbolic addressing (as in TEMP) is what is known as a Symbolic Assembly Program, as implemented on the IBM 704 (Sammet, 1969, pp. 1–26).
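The progression from numeric to symbolic addressing can be illustrated with a toy assembler. This is a hypothetical sketch in Python; the opcode value and the symbol table are invented for the example and are not the IBM 704’s.

```python
# What a symbolic assembler does, in miniature: replace mnemonics and
# symbolic names with the raw numbers the machine actually executes.
SYMBOLS = {"TEMP": 64}        # symbolic name -> memory address (invented)
OPCODES = {"CLA": 0b000100}   # mnemonic -> numeric opcode (invented)

def assemble(line):
    """Translate one instruction such as 'CLA TEMP' or 'CLA 64'."""
    mnemonic, operand = line.split()
    # Symbolic operands are looked up; bare numbers pass through.
    address = SYMBOLS[operand] if operand in SYMBOLS else int(operand)
    return (OPCODES[mnemonic], address)

# 'CLA TEMP' and 'CLA 64' assemble to the same numeric instruction.
print(assemble("CLA TEMP") == assemble("CLA 64"))
```

The gain is exactly the one Sammet describes: the programmer writes TEMP once, and only the symbol table changes when the variable moves in memory.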
Computers at this point were shared devices, hence the development of timesharing systems, a precursor to the concept of the operating system, in which multiple programs use the computer’s processor depending on various factors, often creating the illusion that the computer is not processing things in sequential order. In using these shared machines, people also noticed that they were writing the same code and code sequences again and again in order to reuse them; beyond that, they wanted to reuse code developed by their colleagues. Usually the rewritten code would be tailored to each programmer’s requirements: precision, speed, storage. This led to the development of libraries from which a programmer could easily choose the subroutine to run on their data.
This was followed by assembly coding, which was more readable and easier for human beings to navigate. Programs written in assembly are converted by an assembler into the machine code corresponding to the device they were written for. The disadvantage is that every device and processor is different, and as a result the machine language required to make each work differs from device to device: assembly is averse to portability. This led to the development of automatic programming. Automatic coding, like assembly, is tailored to the intricacies of each individual machine and as a result is not portable either. However, it led to the development of one of the first major programming languages, FORTRAN (Formula Translation), created for the IBM 701 by John Backus, who had seen the automatic-coding work done by programmers on the Whirlwind computer at MIT (Nofre, Priestley, & Alberts, 2014, p. 50).
High-level languages are easy to read. Their clear syntax allows easy application across a variety of fields and problems, including problems outside a language’s immediate design prerogative, as exemplified by the industry-wide adoption of C despite its original purpose as a tool for building the UNIX operating system (McPherson, 2012; Ritchie, 1993). Perhaps the earliest description of a high-level language was Plankalkül, developed by Konrad Zuse during the Second World War. The most popular programming languages today are imperative languages; that is, they give the computer commands that result in a “change of state” (O’Regan, 2012, p. 124). There are also functional languages like Racket and Scheme, dialects of LISP, which are collections of functions that do not result in a change of state for the computer. Logical languages like Prolog are concerned only with what is to be programmed, not how it is programmed. All these languages use keywords that define and structure the way the computer responds to instructions.
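The imperative/functional contrast can be sketched in a few lines. Python is used here for convenience; Racket or Scheme would be the canonical functional examples.

```python
from functools import reduce

# Imperative style: commands that change state (the variable `total`
# is repeatedly overwritten as the loop runs).
def sum_imperative(numbers):
    total = 0
    for n in numbers:
        total = total + n  # each assignment is a change of state
    return total

# Functional style: a single expression built by composing functions,
# with no variable ever mutated.
def sum_functional(numbers):
    return reduce(lambda acc, n: acc + n, numbers, 0)

print(sum_imperative([1, 2, 3]), sum_functional([1, 2, 3]))  # 6 6
```

Both compute the same value; what differs is whether the computation is expressed as a sequence of state changes or as an evaluation of expressions.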
Reserved words and keywords are the words that map directly to the computer when a program is converted from the programming language into machine language. Their commands have direct implications for the running of the program (Sammet, 1969, p. 73). They are logical constructs, and their semantic meanings relate to what the computer is expected to do. ALGOL sits at the root of high-level programming languages: it led to the development of imperative programming, and most imperative languages are direct descendants of the language, or descendants of responses to its failure.
4.4.2. Programming as Language
At the end of the Second World War, American mathematicians who had done computation work for the U.S. military, such as Norbert Wiener, began to develop the field of cybernetics. The field adopted a holistic approach to the study of nature and the sciences, inspired by the possibilities they thought the computer could create. It combined concepts from the physical sciences, biological organization, organic feedback loops and computation, and sought to create autonomous, well-ordered machines. Programming hence became “part of a cybernetic discourse that described modern computers as if they were semi-autonomous, almost human-like agents” (Nofre et al., 2014, p. 41). Even Charles Babbage, the nineteenth-century inventor of an analytical engine on par with the modern computer in terms of desired functionality, took the risk of describing his engine in linguistic terms, deciding that the descriptive benefits far outweighed the misconceptions that might accompany them.
“By abstracting away from the machine, programming languages, more generally, software came to mediate our understanding of what a computer is, leading eventually to the conceptualization of the computer as an infinitely protean machine” (David Nofre et al., 2014, p. 44)
The term “programming language” originates in the user groups that formed around the owners of commercial computers in the 1950s. User groups were formed by customers of a particular computer, who shared resources, including code, standards, libraries, subroutines, and tips, to help get the best from their machine. The most influential was the SHARE User Group, consisting of defense contractors, national security institutions, insurance companies and the like, which came together when IBM started selling the IBM 704 computer (Nofre et al., 2014, p. 62). The term “operating system” likewise comes from usage among SHARE members sometime in the 1960s (Bullynck, 2018, p. 21). Programming languages were thought of as a way to have computers “conversing” with each other.
There were also factors beyond the idealism of human-computer and computer-computer interaction, such as the rising cost of maintaining programming languages. For the early computers there was never a need to worry about programming across multiple machines; there were very few of them. It was the military, which funded most of the research on and use of the devices, that encouraged communication among the different groups working on them. Thus people like von Neumann and Herman Goldstine, working on ENIAC and EDVAC, could communicate with Maurice Wilkes in Cambridge, England about the techniques used in autocoding, and Grace Hopper could communicate with others about the success of her A-0 compiler (Nofre et al., 2014, pp. 43–49; Wexelblat, 1981, pp. 7–19).
Automatic coding, which preceded high-level programming languages, was tailored to the intricacies of each individual machine and as a result was not very transferrable. As noted earlier, it nevertheless led to FORTRAN, created by John Backus for the IBM 701 and inspired by the automatic-coding work of the Whirlwind programmers at MIT. FORTRAN was a watershed because “it changed programming from almost electronics into human activity” (Parker & Davey, 2012, p. 167). By the mid-1950s, with the explosion of commercial computers, the advantages of specialized computers and their unilingual properties began to fade, because companies were upgrading their machines regularly and running into problems of manpower and retraining on the computing facilities they worked with. There were also concerns about who was designing the flurry of languages that accompanied the rise of commercial computing. John Carr, who would become chairman of the ACM, was in charge of the MIDAC computer at the University of Michigan. His biggest concern was the heavy involvement of industry in the design of these languages; he was interested in mistake-free coding that would not involve too much human input, both for ease of pedagogy and to prevent an industry monopoly on programming languages. Derrick Lehmer of the University of California, Berkeley shared this concern, especially in light of the vast contribution the U.S. government had made to the development of the field (Nofre et al., 2014, pp. 50–54).
It was a combination of these practical problems and idealistic questions of computer communication that led to the development of ALGOL. Helena Durnová argues that the ALGOL movement had parallels with the Esperanto movement; the idealism involved led to the hope of creating something that would “allow for a program designed in Zürich to be immediately legible in Darmstadt” (Gobbo & Durnová, 2014). With American involvement, however, these ideals tended to become aligned with American interests. As Nofre notes, “internationalist initiatives were aimed to serve American science and foreign policy, while reinforcing Western scientific cooperation” (2010, p. 63). ALGOL was “the outcome of complex processes of definition and negotiation carried out within and between different kinds of institutions and computing communities” (Nofre, 2010, p. 58). This sheds light on how social institutions continue to shape technical realities. In examining the process by which ALGOL came to be, some of the levers and institutions discussed in previous chapters reappear to influence the shape that the programming language took.
4.5. Linguistic choices: ALGOL, SIMULA, C, C++
In October 1955, the West German mathematicians Friedrich L. Bauer and Klaus Samelson and their Swiss colleague Heinz Rutishauser came to believe that a single universal programming language would help bridge the “tower of Babel” beginning to develop in the world of automatic programming. As members of the German Society for Applied Mathematics and Mechanics (GAMM), they contacted the Association for Computing Machinery (ACM), which had made repeated attempts to create such a language. In May 1957, user groups had met with the ACM in Los Angeles to begin describing a common language, and the intention to create one had been made public. When the ACM was contacted by the West German-Swiss alliance, and spurred by the recent success of Sputnik, it latched on. In a letter to John Carr, the chairman of the ACM, GAMM argued for an “input language for a large class of computers both in Europe and the United States” (Nofre, 2010, p. 63). GAMM was also inspired to send this letter because two of its mathematicians, Bottenbruch and Bauer, had just completed a tour of U.S. computing facilities sponsored by the Office of Naval Research (ONR) (Nofre, 2010, p. 63; Nofre et al., 2014, p. 62). They also immediately acceded to the use of English as a way to entice American support (Perlis & Samelson, 1959, p. 42).
Thirteen members of the two organizations met between 27 May and 2 June 1958 in Zurich. The thirteen included names like John Backus, creator of the FORTRAN language, and Alan Perlis of the IT language. They came out with a preliminary report describing an International Algorithmic Language (IAL), later renamed ALGOL and more popularly known as ALGOL 58. The diverse nature of computing hardware made the task of creating a universal language very difficult: input methods varied widely, some devices having rich character sets and some limited ones. Yet they hoped the language would have the seeming neutrality of mathematics in the natural sciences (Nofre et al., 2014, pp. 64–66). For the success of the meeting in Zurich, the West German mathematicians acceded that natural-language questions would automatically fall to English whenever there was conflict (Perlis & Samelson, 1959, p. 43). This did not always go as planned. The difficulty of the process is best illustrated by an argument in Zurich between two researchers deciding how the decimal point would be represented in the language: the German team insisted on the comma, the American team on the dot, and in the midst of this discussion a technical compromise was made to allow the use of both (Wexelblat, 1981, p. 80).
The committee resolved this problem by outlining three levels of programming language definition: the reference level, the publication level and the hardware level. The reference level is essentially the conversational language used to describe the programming language; the publication level is a documentation language meant to be bias-free and semantically neutral; and the hardware level is specific to the hardware involved, defined as “a condensation of the reference language enforced by the limited number of characters on standard input equipment” which “uses the character set of a particular computer and is the language accepted by a translator for that computer” (Perlis & Samelson, 1959, p. 43). Once the document was published, people interested in the project began to write compilers for the language on the computers they worked with. The first report on ALGOL was intended to be preliminary; it did not address the question of input and output. The resulting document, the Preliminary Report on an International Algorithmic Language, was generally well received. Peter Naur joined the ALGOL project from Regnecentralen in Copenhagen and initiated the ALGOL Bulletin after attending a conference on the language in February 1959 (Durnová & Alberts, 2014). Given the variety of natural languages and of input and output devices, many expressions of ALGOL sprang up; people published their implementations and raised questions about decisions made at the meeting.
After about a year of discussion, the committee met again in Paris in January 1960, under UNESCO auspices, to standardize the language. To solve some of the communication problems, John Backus and Peter Naur introduced a system for describing languages known as the Backus-Naur Form, a method of describing languages with context-free grammars, which the linguist Noam Chomsky developed in parallel. This work led to the publication of ALGOL 60. However, the American user groups no longer supported the project. It had become apparent that the European delegates aimed to create a universal language, on par with the function of Arabic numerals and mathematical symbols, while the American delegation was much more interested in creating what would amount to a universal translator. As a result, support for the language fell in America, especially within the IBM user group SHARE, which had been one of the most influential computing organizations; IBM itself was more interested in pushing its own language, FORTRAN. This was detrimental to the adoption of ALGOL for universal computing (Nofre, 2010, pp. 58–60). In the various meetings where decisions about ALGOL were made, American researchers repeatedly advocated, requested and demanded the use of English (Bemer, 1968, pp. 206, 209; Rosen, 1964, p. 6; Wexelblat, 1981, p. 80), because factors outside the immediate technical solutions affected the creation, adoption and use of the language. ALGOL was very popular in Europe (Alberts & Daylight, 2014; Durnová & Alberts, 2014; Bauer, 2012), but this was not enough to keep it alive.
The influence of American industry hastened its gradual demise, yet the project led to the creation of other languages that became mainstays in industry and academia. For example, the first object-oriented language, SIMULA, developed in Norway by Kristen Nygaard and Ole-Johan Dahl, was a result of the work done in the ALGOL project. As Krige shows, the language was created in service of a U.S. antisubmarine weapons system for NATO headquarters (Krige, 2006, pp. 241–242). This is important when considering the popularity of C++, developed by Bjarne Stroustrup while working for Bell Labs. Stroustrup was introduced to SIMULA at Aarhus University in Denmark, where the language had become a mainstay of Scandinavian computing, and to another ALGOL descendant, BCPL (which inspired the C language), while studying at Cambridge (Computer History Museum, 2015). He sought to combine the speed of C with the object-oriented power of SIMULA. But, as has been discussed, these languages were so closely tied to American funding that by the time C++ was being designed in the 1980s, the decision to use English had already been made at the first international conference to design ALGOL in the 1950s (Ritchie, 1993; Stroustrup, 1994, pp. 133–153). Similar funding structures, through DARPA, tied the Python programming language to English (Rossum, n.d.).
The development of English as the lingua franca of programming is not solely the result of American pioneering strides in the development of the computing industry. Nor is the development of the computing industry a neutral “free market” business project: it was a purposefully directed project to cement American domination of European politics at the end of the Second World War. This, I believe, is the essence of the computer. As Winner and Manovich correctly argue, it is difficult to refute that these artefacts, despite their limitless possibilities, have inherent properties that make them work perfectly for the social hierarchies in which they are employed. Updating the computer to adopt multilingual standards is tantamount to updating the New York traffic system that Robert Moses set up to prevent the development of public transit networks, cementing the immobility of New York’s poor, usually minorities. The amount of work needed to undo these technical realities cannot possibly match the amount of work done to institute the artefact. The computer, an obviously authoritarian device at its institution in the 1940s, was adapted to the budding global project that was to accompany the American Cold War effort. Hence the preservation of English in programming, despite the possibility of alternatives, indicates how the authoritarian nature of this “inherently democratizing” device is preserved in its miniaturization and distribution.
Chapter 5. Conclusion
“Ọ̀nà kan ò wọjà”
– Yoruba Proverb12
Our world has now been heavily computerized, and we should not expect this to stop anytime soon. Computational abstractions have become more than abstractions; computation has become a thing (Finn, 2017, p. 2). It affects how we live and how our
institutions function. UPS drivers, in service to the algorithms that track their speed, toss
deliveries to save time on their routes. High Frequency Traders (HFTs) itch to get their
computers closer and closer to centers of arbitrage, so as to edge out their competitors
(Finn, 2017, pp. 12, 51). People work, post and comment to stay visible to the algorithms
of social media sites so that they are not algorithmized away from their social groups
(Bucher, 2012). This world of the algorithm is so abstract it constitutes another
dimension. The formal way to move the levers of this dimension is through
programming, which we have been implored to learn, to empower ourselves (Finley,
2013; Vee, 2017). The English language has successfully mediated this interface of
empowerment. Feenberg posits that "the democratization of our society requires radical
technical as well as political change" because "modern forms of hegemony are based on
the technical mediation… of social activities" (1995, p. 4). Following Fabian Prieto-
Ñañez's prompt to look into how postcolonial histories of the digital "offers a way to
understand structures of social power, infrastructures, assemblages, and political
economies that create the conditions under which techno-scientific objects are created
and used" (2016, p. 3), this essay showed how algorithmic computation is skewed
towards traditional power, made visible through the preponderance of the English
language in that technical space.
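That preponderance is easy to observe at the level of syntax itself. As a minimal illustration of my own (not drawn from this essay's sources), Python, one of the languages this essay names, ships a standard-library `keyword` module that enumerates its reserved words, every one of which is borrowed from English:

```python
import keyword

# Python's reserved words, as enumerated by the standard library itself.
reserved = set(keyword.kwlist)

# Each of these ordinary English words is a reserved keyword of the language.
english_keywords = {"if", "else", "while", "for", "return",
                    "import", "class", "and", "or", "not"}

# Confirm that all of them are in the language's keyword list.
assert english_keywords <= reserved
print(sorted(english_keywords))
```

The language offers no mechanism for localizing these words: a programmer writing in any natural language must still type `if`, `while` and `return` in English.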
The emancipation touted by this technology is overshadowed by its existence in
a world that is already organized to exploit certain peoples and systems. This rhetoric
ignores the fact that power has been established through extradigital means, and that
our means of communicating with the computer is the product of unequal socio-political
and technical relations: the result of structures of colonialism, war, imperialism and
12 Interpretation: There is never one solution to a problem, one route to a destination, one story of success. Literal meaning: “One way does not lead to the market”
reactionary universalism, structures that have now been decentralized with the
distribution of computing power. Without us necessarily being aware, millions of
decisions are made
every moment by code written in various programming languages. These decisions are
based on the intuitions of programmers, whose work is by no means scientific.
Galloway posits that "code is the first language that actually does what it says" (2004,
pp. 165–166). Those who have the power to communicate with the computer achieve an
elevation of status. Chun writes that we indulge a "fantasy of an all-powerful
programmer, a sovereign… subject who magically transforms words into things" (2011,
p. 19). And as Margaret Benston reiterates, the assumption that technical know-how
gives us control over technology is a very dubious one (1983, p. 22). We must
"[r]ecognize that many goals in …society involve repression which masks as
opportunity" (Stuckey, 1991, p. 61).
5.1. Parallels of Emancipatory Rhetoric
The architects of the precursor to our contemporary international financial
structure met at Bretton Woods in 1944. They imagined an international monetary order
that would break the boundaries set by nationalism. Under this cover, the International
Monetary Fund (IMF) was formed to address the developmental gap between the
economies of core countries and periphery countries (Helleiner, 2017, pp. 155–156;
Wallerstein, 2004). At the time this solution was created, the British and French empires
represented their colonies. Colonization, though unpopular, was normalized. Any viable
solution to the economic disparities created by the world system at that juncture had to
address the injustices of colonization, yet the colonizers alone spoke in this deliberation.
Without an acknowledgement of colonization and its postcolonial realities, there was
always going to be discontent and inequity for those who had been entrapped by
institutions outside their reach. As a result, the IMF procedurally became a tool, a social
technology, to liberalize and marketize the colonization of the third world. The
international monetary order did not start from the assumption that the playing field had
been skewed by decades of exploitation through colonialism. The role of language in the
creation of the IMF may be abstract, but the interconnections of language and religion
are clearer.
When a new religion is introduced to a society, it is important that the language of
that society is ported into the religion. This provides a means for locals to assume
ownership of the rituals associated with the religion. In Nigeria, for example, the Bible
was translated into local languages for easier access to the technology of religion.
Through interpretation of the Bible, there was more access to the purpose behind its
messages, such that local actors, though beholden to the new colonial structures of
power, could easily convey its ideas to people, who took the message to such heights
that it has been argued that the only places where a strong belief in the technology of
religion exists today are countries in postcolonial chokeholds like Nigeria (Akintoye,
2010). However, there is no real need to translate the tenets of the text, as translation is
not a requirement for following the rituals of a religion or accessing its spirituality. This
was particularly true of pre-Reformation churches in Europe, where only the clergy held
an understanding of the Bible because of their access to the language it was written in:
Latin. Reformers abhorred this. The religious populace did not have control of, or access
to, the artefact which produced the operating system of their lives. They had no access
to the language it was written in. Their access depended on an elite class who had
mastered Latin and, having proved worthy of the church, been given access to the
language. In this sense the Reformation represents an anti-universalization movement
meant to restore balance in the face of the injustices faced by the populace. While
language was not at the core of Reformation demands, it exposed the entrenchment of
the oppressive regime that ruled people's lives (Muir, 1997).
5.2. Points of Control and Access
These examples illustrate the fundamental result of our anglo-programming
reality: control and access. As Raley writes of the register of English in programming,
"[c]ombining as it does commerce and the iconic, such a register allows for a power that
is manifest as it is abstract" (Raley, 2003, p. 305). For the powerful American state, this
is a project that manifests through access. In 2010, when the United States felt it needed
to undemocratically deter Iran's nuclear enrichment program, it exploited "backdoors" in
the Windows operating system. Its intelligence services wrote a program, a virus named
Stuxnet, that gained access to an Iranian computing network thousands of miles away
from the United States. With access to the network, it overspun the facility's centrifuges
until they were destroyed. The virus reached the centrifuges' computers through a USB
drive onto which it had been loaded. The centrifuges' entire software architecture was
based on an American product, Windows, written in "American" programming
languages, C and C++. Nor was Stuxnet limited to large devices; it was designed to
function in devices as small as printers and scanners (Collins & McCombie, 2012, pp.
84–86; Fisk, 2012, pp. 171–172; Keizer, 2010). In 2013, Edward Snowden revealed the
NSA's
PRISM program, which showed that the U.S. had created backdoors into internet
companies so that it could collect the data that passed through their servers. Its targets
ranged from citizens of unfriendly countries to leaders of allied ones. The implications for
privacy, national security and sovereignty are tied to the foundational architecture of a
technology that has already been colonized by the United States (Lyon, 2015).
5.3. English and Technical Democracy
As Margaret Benston argues, in "…computer systems, the [user] has extremely
limited terms of interaction with the system, terms that are predefined by someone
else" (1983, p. 20), and these terms embody the "logic of the people who own and
control the economy" (1983, p. 22). This essay traced that predefinition to colonialism
and linguistic imperialism, which "presuppose[s] an overarching structure of
asymmetrical, unequal exchange, where language dominance dovetails with economic,
political and other types of dominance" (Phillipson, 2009, p. 2). In this case, dominance
in the computer programming space fixes "global English as a precursor network and
medium of late twentieth-century communication," where "computer languages maintain
a parallel currency and legitimation" for traditional power structures (Raley, 2003, p.
307). As Feenberg argues, "[u]nless democracy can be extended beyond its traditional
bounds into the technically mediated domains of social life, its use-value will continue to
decline, participation will wither, and the institutions we identify with a free society will
gradually disappear" (1995, p. 4). These institutions in our pockets, on our bodies,
controlling our cities and how we make friends, learn, create, work and participate, are
already working towards a socially predetermined reality, rooted in the fundamental
ways they have been designed and in how we communicate with them.
References
Akintoye, A. S. (2010). A History of the Yoruba People. Dakar: Amalion Publishing.
Alberts, G., & Daylight, E. G. (2014). Universality versus Locality: The Amsterdam Style
of Algol Implementation. IEEE Annals of the History of Computing, 36(4), 52–63.
https://doi.org/10.1109/MAHC.2014.61
Banerjee, R., Liu, L., Sobel, K., Pitt, C., Lee, K. J., Wang, M., … Popovic, Z. (2018).
Empowering Families Facing English Literacy Challenges to Jointly Engage in
Computer Programming. Proceedings of the 2018 CHI Conference on Human
Factors in Computing Systems, 622:1–622:13.
https://doi.org/10.1145/3173574.3174196
Barney, D. D. (2004). The network society. Cambridge ; Malden, MA: Polity.
Bauer, F. L. (2012). Die ALGOL-Verschwörung. Informatik-Spektrum, 35(2), 141–149.
https://doi.org/10.1007/s00287-011-0585-0
Baugh, A. C., & Cable, T. (2002). A history of the English language (5th ed.). London:
Routledge.
Bemer, R. W. (1968). A Politico-social History of Algol: With a Chronology in the Form of
a Log Book.
Benston, M. L. (1983). The Myth of Computer Literacy. Canadian Woman Studies, 5(4).
Retrieved from http://cws.journals.yorku.ca/index.php/cws/article/view/13179
Bergin, T. (2007). A history of the history of programming languages. Communications of
the ACM, 50(5), 69–74. https://doi.org/10.1145/1230819.1230841
Billig, M. (2010). Banal Nationalism. London: SAGE Publications Ltd.
Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility
on Facebook. New Media & Society, 14(7), 1164–1180.
https://doi.org/10.1177/1461444812440159
Bullynck, M. (2018). What Is an Operating System? A Historical Investigation (1954–
1964). In L. De Mol & G. Primiero (Eds.), Reflections on Programming Systems:
Historical and Philosophical Aspects (pp. 49–79). https://doi.org/10.1007/978-3-
319-97226-8_3
Chinweizu. (1975). The West and the Rest of Us: White Predators, Black Slavers, and
the African Elite. Random House.
Chun, W. H. K. (2006). Introduction: Did Somebody Say New Media? In New Media, Old
Media: A History and Theory Reader (1st ed., pp. 1–10). New York: Routledge.
Chun, W. H. K. (2011). Programmed Visions: Software and Memory. Cambridge,
Massachusetts: The MIT Press.
Collins, S., & McCombie, S. (2012). Stuxnet: The emergence of a new cyber weapon
and its implications. Journal of Policing, Intelligence and Counter Terrorism, 7(1),
80–91. https://doi.org/10.1080/18335330.2012.653198
Computer History Museum. (2015). Oral History of Bjarne Stroustrup.
Cortada, J. W. (2008). Patterns and Practices in How Information Technology Spread
around the World. IEEE Annals of the History of Computing, 1(4), 4–25.
Cortada, J. W. (2014). When Knowledge Transfer Goes Global: How People and
Organizations Learned About Information Technology, 1945–1970. Enterprise &
Society, 15(1), 68–102. https://doi.org/10.1093/es/kht095
Cox, G. (2013). Speaking code: Coding as aesthetic and political expression (code by
Alex McLean; foreword by Franco "Bifo" Berardi). Cambridge, Mass: The MIT
Press.
CPI Inflation Calculator. (n.d.). Retrieved August 17, 2019, from https://data.bls.gov/cgi-
bin/cpicalc.pl
Dasgupta, S., & Hill, B. M. (2017). Learning to Code in Localized Programming
Languages. Proceedings of the Fourth (2017) ACM Conference on Learning @
Scale, 33–39. https://doi.org/10.1145/3051457.3051464
Du Boulay, B. (1986). Some Difficulties of Learning to Program. Journal of Educational
Computing Research, 2(1), 57–73. https://doi.org/10.2190/3LFX-9RRF-67T8-
UVK9
Durnová, H., & Alberts, G. (2014). Was Algol 60 the First Algorithmic Language? IEEE
Annals of the History of Computing, 36(4), 104–104.
https://doi.org/10.1109/MAHC.2014.63
Durnová, Helena. (2014). Embracing the Algol Effort in Czechoslovakia. IEEE Annals of
the History of Computing, 36(4), 26–37. https://doi.org/10.1109/MAHC.2014.51
Eastman, C. M. (1982). A Comment on English Neologisms and Programming
Language Keywords. Communications of the ACM, 25(12), 938–940.
https://doi.org/10.1145/358728.358756
Edwards, P. N. (1996). The closed world: Computers and the politics of discourse in
Cold War America. Cambridge, Mass: MIT Press.
Ensmenger, N. L. (2010). The Computer Boys Take Over: Computers, Programmers,
and the Politics of Technical Expertise. The MIT Press.
Fanon, F., & Lam Markmann, C. (1986). Black skin, white masks. London: Pluto Press.
Feenberg, A. (1995). Subversive Rationalization: Technology, Power, and Democracy.
In A. Feenberg & H. Alastair (Eds.), Technology and the Politics of Knowledge
(pp. 3–20). Indianapolis: Indiana University Press.
Ferguson, G. (2012). English in language policy and management. In B. Spolsky (Ed.),
The Cambridge Handbook of Language Policy (pp. 475–498).
https://doi.org/10.1017/CBO9780511979026.029
Finley, K. (2013, December 9). Obama Says Everyone Should Learn How to Hack.
Wired. Retrieved from https://www.wired.com/2013/12/obama-code/
Finn, E. (2017). What Algorithms Want: Imagination in the Age of Computation.
Cambridge, Massachusetts: The MIT Press.
Fisk, D. (2012). Cyber security, building automation, and the intelligent building.
Intelligent Buildings International, 4(3), 169–181.
https://doi.org/10.1080/17508975.2012.695277
Gallagher, B. (2018). How to Turn Down a Billion Dollars: The Snapchat Story. St.
Martin’s Press.
Galloway, A. R. (2004). Protocol: How Control Exists after Decentralization. Cambridge,
Massachusetts: The MIT Press.
Galloway, A. R. (2006). Protocol vs Institutionalization. In New Media, Old Media: A
History and Theory Reader (1st ed., pp. 187–198). New York: Routledge.
Gobbo, F., & Durnová, H. (2014). From Universal to Programming Languages. Retrieved
from https://dare.uva.nl/search?identifier=02cf35bc-67e8-4bc7-8ee2-
069ecfeac5f9
Google. (n.d.). Retrieved August 14, 2019, from https://www.google.com/
Gordin, M. D. (2015). Scientific Babel: How Science Was Done Before and After Global
English. University of Chicago Press.
Gordon, D. C. (1978). The French Language and National Identity (1930-1975). Mouton.
Guo, P. J. (2018). Non-Native English Speakers Learning Computer Programming:
Barriers, Desires, and Design Opportunities. Proceedings of the 2018 CHI
Conference on Human Factors in Computing Systems, 396:1–396:14.
https://doi.org/10.1145/3173574.3173970
Helleiner, E. (2017). The Macro Social Meaning of Money: From Territorial Currencies to
Global Money. In N. Bandelj, F. F. Wherry, & V. A. Zelizer (Eds.), Money Talks:
Explaining How Money Really Works (pp. 145–157). Retrieved from
http://muse.jhu.edu/book/52241/
Keating, G. (2012). Netflixed: The Epic Battle for America’s Eyeballs. Penguin.
Keizer, G. (2010, September 16). Is Stuxnet the “best” malware ever? Retrieved August
30, 2019, from InfoWorld website: https://www.infoworld.com/article/2626009/is-
stuxnet-the--best--malware-ever-.html
Krige, J. (2006). American hegemony and the postwar reconstruction of science in
Europe. Cambridge, Massachusetts: MIT Press.
Lewis, N. (2016). Peering through the Curtain: Soviet Computing through the Eyes of
Western Experts. IEEE Annals of the History of Computing, 38(1), 34–47.
https://doi.org/10.1109/MAHC.2015.48
Liblit, B., Begel, A., & Sweetser, E. (2006). Cognitive Perspectives on the Role of
Naming in Computer Programs. PPIG.
Liu, H. Y. (2018). Modern Chinese national identity and transportation (Extended Essay).
Simon Fraser University.
Lyon, D. (2015). The Snowden Stakes: Challenges for Understanding Surveillance
Today. Surveillance & Society, 13(2), 139–152.
https://doi.org/10.24908/ss.v13i2.5363
Manovich, L. (2001). The Language of Cultural Interfaces. In The Language of New
Media (pp. 80–102). Cambridge, Massachusetts: The MIT Press.
McPherson, T. (2012). U.S. Operating Systems at Mid-Century: The Intertwining of Race
and UNIX. In L. Nakamura & P. A. Chow-White (Eds.), Race After the Internet
(pp. 21–37). New York: Routledge.
Mhashi, M. M. (2013). Difficulties Facing Students in Learning Computer Programming
Skills at.
Mimiko, N. O. (2012). Globalization: The Politics of Global Economic Relations and
International Business. Carolina Academic Press.
Muir, E. (1997). Ritual in Early Modern Europe. Cambridge University Press.
Nakamura, L. (2014). Indigenous Circuits: Navajo Women and the Racialization of Early
Electronic Manufacture. American Quarterly, 66(4), 919–941.
https://doi.org/10.1353/aq.2014.0070
Srnicek, N. (2017). Platform capitalism. Cambridge, UK; Malden, MA: Polity.
Nofre, D. (2010). Unraveling Algol: US, Europe, and the Creation of a Programming
Language. IEEE Annals of the History of Computing, 32(2), 58–68.
https://doi.org/10.1109/MAHC.2010.4
Nofre, David. (2014). Managing the Technological Edge: The UNESCO International
Computation Centre and the Limits to the Transfer of Computer Technology,
1946-61. Annals of Science, 71, 410–431.
https://doi.org/10.1080/00033790.2013.827075
Nofre, David, Priestley, M., & Alberts, G. (2014). When Technology Became Language:
The Origins of the Linguistic Conception of Computer Programming, 1950–1960.
Technology and Culture, 55(1), 40–75. https://doi.org/10.1353/tech.2014.0031
O’Regan, G. (2012). A Brief History of Computing (2nd ed.). London: Springer.
Paju, P. (2008). National Projects and International Users: Finland and Early European
Computerization. IEEE Annals of the History of Computing, 1(4), 77–91.
Paju, P., & Durnová, H. (2009). Computing Close to the Iron Curtain: Inter/national
Computing Practices in Czechoslovakia and Finland, 1945-1970. Comparative
Technology Transfer and Society, 7(3), 303–322.
Paju, P., & Haigh, T. (2016). IBM Rebuilds Europe: The Curious Case of the
Transnational Typewriter. Enterprise & Society, 17(2), 265–300.
https://doi.org/10.1017/eso.2015.64
Paju, P., & Haigh, T. (2018). IBM’s Tiny Peripheral: Finland and the Tensions of
Transnationality. Business History Review, 92(1), 3–28.
https://doi.org/10.1017/S0007680518000028
Parikka, J. (2016). A Brief Media Archaeology of Software and Artificial Life. In W. H. K.
Chun, A. W. Fisher, & P. A. Chow-White (Eds.), New Media Old Media: A History
and Theory Reader (2nd ed., pp. 275–285). New York: Routledge.
Parker, K. R., & Davey, B. (2012). The History of Computer Language Selection. In A.
Tatnall (Ed.), Reflections on the History of Computing: Preserving Memories and
Sharing Stories (pp. 166–179). https://doi.org/10.1007/978-3-642-33899-1_12
Paulston, C. B., & Watt, J. M. (2012). Language policy and religion. In B. Spolsky (Ed.),
The Cambridge Handbook of Language Policy (pp. 335–350).
https://doi.org/10.1017/CBO9780511979026.021
Perlis, A. J., & Samelson, K. (1959). Report on the Algorithmic Language ALGOL the
ACM committee on programming languages and the GAMM committee on
programming. Numerische Mathematik, 1(1), 41–60.
https://doi.org/10.1007/BF01386372
Perlis, Alan J. (1978). The American side of the development of Algol. ACM SIGPLAN
Notices, 13(8), 3–14. https://doi.org/10.1145/960118.808369
Peters, B. (2016). How not to network a nation: The uneasy history of the Soviet internet.
Cambridge, Massachusetts: MIT Press.
Phillipson, R. (2009). Linguistic imperialism continued. Retrieved from
https://ebookcentral.proquest.com/lib/sfu-ebooks/detail.action?docID=614988
Phillipson, R. (2012). Imperialism and colonialism. In B. Spolsky (Ed.), The Cambridge
Handbook of Language Policy (pp. 203–225).
https://doi.org/10.1017/CBO9780511979026.013
Prieto-Ñañez, F. (2016). Postcolonial Histories of Computing. IEEE Annals of the History
of Computing, 38(2), 2–4. https://doi.org/10.1109/MAHC.2016.21
Qian, Y., & Lehman, J. D. (2016). Correlates of Success in Introductory Programming: A
Study with Middle School Students. Journal of Education and Learning, 5(2), 73–
83.
Raley, R. (2003). Machine Translation and Global English. The Yale Journal of Criticism,
16(2), 291–313. https://doi.org/10.1353/yale.2003.0022
Ritchie, D. M. (1993). The development of the C language. ACM SIGPLAN Notices,
28(3), 201–208. https://doi.org/10.1145/155360.155580
Rosen, S. (1964, April 21). Programming systems and languages: A historical survey. 1–
15. https://doi.org/10.1145/1464122.1464124
Rossum, G. van. (1999). Programming for Everybody (Revised Proposal): A Scouting
Expedition for the Programmers of Tomorrow. Corporation for National Research
Initiatives, CNRI Proposal #90120-1a.
Rossum, G. van. (n.d.). Computer Programming for Everybody. Retrieved August 7,
2019, from Python.org website: https://www.python.org/doc/essays/cp4e/
Sammet, J. (1972). Programming languages: History and future. Communications of the
ACM, 15(7), 601–610. https://doi.org/10.1145/361454.361485
Sammet, J. E. (1966). The Use of English As a Programming Language. Commun.
ACM, 9(3), 228–230. https://doi.org/10.1145/365230.365274
Sammet, J. E. (1969). Programming languages: History and fundamentals. Englewood
Cliffs, NJ: Prentice-Hall.
Schiller, D. (2008). The Militarization of U.S. Communications. Communication, Culture
& Critique, 1(1), 126–138. https://doi.org/10.1111/j.1753-9137.2007.00013.x
Scott, J. C. (2009). The art of not being governed: An anarchist history of upland
Southeast Asia. New Haven: Yale University Press.
Scott, M. L. (2009). Programming language pragmatics (3rd ed.). Retrieved from
http://www.sciencedirect.com/science/book/9780123745149
Spivak, G. C. (n.d.). Can the Subaltern Speak? In P. Williams & L. Chrisman (Eds.),
Colonial Discourse and Postcolonial Theory: A Reader (p. 24). New York:
Columbia University Press.
Sterne, J. (2013). Format Theory. In MP3: The Meaning of a Format (Sign, Storage,
Transmission) (pp. 1–31). Durham, N.C.: Duke University Press.
Stone, B. (2017). The Upstarts: How Uber, Airbnb, and the Killer Companies of the New
Silicon Valley Are Changing the World. Boston, MA, USA: Atlantic/Little, Brown.
Stroustrup, B. (1993). A history of C++: 1979--1991. ACM SIGPLAN Notices, 28(3),
271–297. https://doi.org/10.1145/155360.155375
Stroustrup, B. (1994). The design and evolution of C++. Reading, Mass: Addison-
Wesley.
Stuckey, J. E. (1991). The violence of literacy. Retrieved from
http://archive.org/details/violenceoflitera00stuc
Tatarchenko, K. (2010). Cold War Origins of the International Federation for Information
Processing. IEEE Annals of the History of Computing, 32(2), 46–57.
Tatarchenko, K. (2011). “Lions – Marchuk”: The Soviet-French Cooperation in
Computing. In J. Impagliazzo & E. Proydakov (Eds.), Perspectives on Soviet and
Russian Computing (pp. 235–242). Springer Berlin Heidelberg.
Turner, F. (2006). Networking the New Economy. In From Counterculture to
Cyberculture: Stewart Brand, The Whole Earth Network, and the Rise of Digital
Utopianism (pp. 173–206). Chicago: The University of Chicago Press.
Tworek, H. J. S. (2016). How not to build a world wireless network: German–British
rivalry and visions of global communications in the early twentieth century.
History and Technology, 32(2), 178–200.
https://doi.org/10.1080/07341512.2016.1217599
Vee, A. (2017). Coding literacy: How computer programming is changing writing.
Wallerstein, I. M. (2004). World-systems analysis: An introduction. Durham: Duke
University Press.
Wexelblat, R. L. (Ed.). (1981). History of Programming Languages. New York: Academic
Press.
Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136.
Wright, S. (2012). Language policy, the nation and nationalism. In B. Spolsky (Ed.), The
Cambridge Handbook of Language Policy (pp. 59–78).
https://doi.org/10.1017/CBO9780511979026.006