A Brief History of Logic And Computing


Behind it all is surely an idea so simple, so beautiful, that when we grasp it– in a decade, a century, or a millennium– we will all say to each other, how could it have been otherwise?

John Archibald Wheeler

Human Knowledge

Knowledge is a snapshot of reality in a format interpretable by us. It exists in the form of memories, written text, or any other recorded medium, but it gains meaning only through interpretation. The knowledge encoded in DNA to build bodies makes sense only because our cells have a built-in mechanism to transcribe the code. Evolution provided us with the preliminary knowledge that helped us survive in the African savanna. But we are a far-reaching civilization. We settled the previously uninhabitable deserts of the Sahara and the frozen plains of Antarctica because we can use knowledge to enhance our chances of survival. Yet unapplied knowledge is no knowledge at all, and the limitations of our biology prevent us from reaching beyond our means. We needed to build tools.

Our brains have a natural curiosity to understand, interpret, and uncover inconspicuous patterns in reality. Although we exhibit random behavior most of the time, sometimes we do appear to behave rationally. We often strive to reason out the causes of the events we observe around us. But how do we comprehend the workings of rational thought itself? For thousands of years, we have been offloading work to machines that relieve some of our physical effort. But how did we end up building machines that do some of our thinking for us?

Modern computers are the culmination of centuries of attempts to formalize rational thought. They are tools of knowledge.


We have known for a long time that calculation can be mechanized. Blaise Pascal built one of the first mechanical calculators, capable of basic arithmetic. We have come so far on that journey that a single smartphone of today packs up to 400 billion transistors. But the history of computers has not always been about transistors and mechanical adders. Rather, it must be seen as a history of revolutionary ideas in philosophy and mathematics – the then-esoteric field of mathematical logic. It would end up being more impactful on the modern world than any other scholarship.

Computer science grew out of mathematical logic in the 1930s with the publication of two historic papers: Claude Shannon’s A Symbolic Analysis of Relay and Switching Circuits and Alan Turing’s On Computable Numbers, with an Application to the Entscheidungsproblem.

Shannon was an electrical engineering student at MIT. The primary reference of his thesis was a 90-year-old work by the British philosopher and logician George Boole – a pioneer of mathematical logic. Turing was trying to solve David Hilbert’s decision problem, the Entscheidungsproblem. As a by-product, he showed how general-purpose computers could be designed to process mathematical logic. The history of computers is deeply rooted in the history of logic systems.


Early Logic

“Since Aristotle, logic had been unable to take a single step forward, and therefore seems to all appearance to be finished and complete.”

Immanuel Kant

Aristotle presented his logic system in his six-part work, The Organon. He devised a set of axioms from which he derived the rest of his logic system:

  • Law of Identity: An object is what it is
  • Law of Non-Contradiction: A statement can’t be both true and false at the same time
  • Law of the Excluded Middle: A statement has to be either true or false
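
In modern propositional notation, which Aristotle himself never used, the last two laws read:

$$\neg(P \land \neg P) \qquad\text{and}\qquad P \lor \neg P$$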

These axioms were not meant to describe how humans actually think. Rather, Aristotle argued that this is how a rational person ought to think.

A central theme of Aristotle’s Organon was the syllogism – a form of deductive reasoning that derives a conclusion from two propositions. He posited that an argument’s validity depends on its logical structure rather than on the objects and predicates it mentions. For example:

  1. All trees have roots
  2. The banyan is a tree
  3. Therefore, banyans have roots

You can replace ‘the banyan’ with any other tree and ‘roots’ with any other predicate and the argument will remain valid. The logical words ‘All’, ‘is’, ‘have’, and ‘therefore’ preserve the logical structure.
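
Stripped of its content, the argument has a purely schematic shape; in a modern rendering (notation Aristotle never used):

$$\text{All } A \text{ are } B;\quad c \text{ is an } A;\quad \text{therefore } c \text{ is a } B.$$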

Following Aristotle’s first successful attempt at identifying the logical connectives of natural language, Euclid published the most influential mathematical textbook ever written – the Elements. Its logical rigor remained unsurpassed until the 19th century, and its influence over the centuries was so great that it is second only to the Bible in the number of editions printed.

The first printed edition of Euclid’s Elements in both Greek and Latin was published in 1558

Although geometry originated independently in India around the same time as Euclid in the 3rd century BC, Euclid’s Elements is famous for putting geometric ideas into an axiomatic form.

The Greeks were the first mathematicians who are still ‘real’ to us to-day. Oriental mathematics may be an interesting curiosity, but Greek mathematics is the real thing.

G.H. Hardy

Geometry continued to be practiced in the diagrammatic way demonstrated in the Elements until the 17th century, when René Descartes gave geometry an analytic footing by showing that geometric magnitudes can be represented as variables and geometric operations replaced by formulas. A circle, for instance, becomes the equation x² + y² = r², and questions about figures become questions about solving equations. In La Géométrie, an appendix to his Discourse on Method, he popularized the algebraic convention of using x, y, and z for unknown quantities. This eliminated the need for spatial intuition and allowed mathematicians to operate under well-defined formal rules, shifting the prevailing trend in mathematics from diagrams to formulas.

An important outcome of Descartes’ algebraic notation was that, just 30 years after its introduction, Isaac Newton and Gottfried Leibniz independently invented calculus – a development owing much to Descartes’ Cartesian coordinate system.

Inspired by this paradigm shift, Boole set out to do the same for Aristotelian logic. He freed philosophical logic from human intuition by giving it an algebraic notation. In 1854, he published his book The Laws of Thought, which gave birth to the field of mathematical logic. He opens the book with the goal of uncovering the laws of the operations of the human mind:

The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; … to collect some probable intimations concerning the nature and constitution of the human mind.

George Boole
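
In Boole’s algebra, propositions about classes become equations. Writing $x$ for a class of things, what he presents as a fundamental law of thought is

$$x^2 = x, \qquad\text{equivalently}\qquad x(1-x) = 0,$$

which is the law of non-contradiction in algebraic dress: nothing can both belong to a class and fail to belong to it.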

Let Us Calculate

Almost 200 years before Boole, Gottfried Leibniz put forward the idea of a new language he called the characteristica universalis. He was obsessed with creating an alphabet of human thought: an ideographic language, resembling Egyptian hieroglyphics, in which each character would stand for an atomic concept of math or science. It was to serve as an ‘instrument’ that could enhance human reasoning, and he imagined a machine that could use this symbolic language to perform logical computations.

If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, and say to each other: Calculemus—Let us calculate.

Gottfried Leibniz

Leibniz’s dream was first given shape in 1879 by the German philosopher Gottlob Frege with the publication of his Begriffsschrift, which translates to “concept writing” and is subtitled “a formula language, modelled on that of arithmetic, for pure thought”. Over the next 25 years, Frege would apply his logical calculus to formalizing the foundations of mathematics.

He built on Leibniz’s calculus of propositions to develop ‘first-order logic’, which is widely used today in math, philosophy, linguistics, and computer science. First-order logic uses quantified variables that range over non-logical objects, such as the things named in sentences. He was also the first to cleanly separate objects from predicates in mathematical expressions (for example, “there exists an x such that y holds of it”). His daring innovation was a logic system that mirrored the structure of ordinary language.
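
To illustrate (the predicate names here are our own, not Frege’s notation), quantified variables let ordinary claims be written symbolically:

$$\forall x\,(\mathrm{Tree}(x) \rightarrow \mathrm{HasRoots}(x)) \qquad \exists x\,(\mathrm{Tree}(x) \land \neg \mathrm{HasLeaves}(x))$$

read as “every tree has roots” and “some tree has no leaves”.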

Most importantly, Frege’s work The Foundations of Arithmetic helped catalyse the linguistic turn in philosophy. Just as Enlightenment philosophers were obsessed with questions of knowledge, philosophy after Frege became obsessed with questions of language. His concept script was very similar to today’s programming languages: it used meaningless symbols operating under a specific set of rules, gaining meaning only through an interpretation guide that accompanies the language. This is the origin of the split between syntax and semantics in today’s programming languages. He also pioneered concepts that are now central to computer science, such as recursive functions, variables, scope, and binding.

A striking consequence of Frege’s work was the exposure of weaknesses in the foundations of mathematics. Euclid’s Elements, long considered the Bible of mathematics, was found to be full of logical errors. This fuelled an intense period of mathematical innovation. In the late 1800s, Giuseppe Peano and David Hilbert developed axioms for arithmetic and geometry, respectively. Hilbert also outlined a framework for formalizing the rest of mathematics, positing that formal systems should satisfy the requirements of completeness (every true statement can be proved within the system) and decidability (there is an algorithm that can determine whether any given statement is true or false).


A Fatal Blow

This started an arms race in mathematics to build better logic systems that could satisfy Hilbert’s program. One logician’s work would be torn down by another once its inconsistencies were discovered. Paradoxical statements were a favored tool for exposing the flaws in these formal systems. A famous example was Bertrand Russell’s paradox:

Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves.

Bertrand Russell
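
Written symbolically, the paradox is compact:

$$R = \{\,x \mid x \notin x\,\} \quad\Longrightarrow\quad R \in R \iff R \notin R$$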

Russell’s paradox was seen as a serious flaw in Frege’s logic. Later Frege would comment that this came as his greatest surprise.

Between 1910 and 1913, Bertrand Russell and Alfred North Whitehead would publish the three volumes of Principia Mathematica. Their methods were so careful and detailed that it took them over 300 pages to reach a proof that 1 + 1 = 2.

To resolve Russell’s paradox, they introduced type theory: a hierarchy of types in which a statement of a given type may refer only to types below it, never to itself or to higher types. This eliminated self-referential paradoxes within their system. But the apparent completeness it lent to mathematics wouldn’t last long.

Hilbert’s dream of mathematical completeness would be shattered in 1931 when Kurt Gödel published his famous incompleteness theorems. His results would have reverberating consequences for how mathematical philosophy has been perceived ever since. Gödel proved that any consistent axiomatic formal system capable of expressing basic arithmetic is incomplete: it contains true statements that can be neither proved nor disproved within the system.

A fatal blow came in 1936 when Turing published his paper On Computable Numbers, with an Application to the Entscheidungsproblem, proving that no algorithm could decide in finite time whether an arbitrary mathematical statement was true or false. Turing’s genius lies in the fact that, to settle decidability, he created a mathematical model of a computing machine – the Turing machine.
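
To make the model concrete, here is a minimal sketch, not Turing’s own formulation, with rule names and helper code of our own choosing: a machine is just a finite table of rules of the form (state, read symbol) → (next state, write symbol, move), acting on a tape.

```python
# A minimal Turing machine sketch: a finite rule table acting on a tape.
# The example rules flip every bit of a binary input, then halt at the blank.
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rules: (state, read symbol) -> (next state, write symbol, head move)
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # -> 01001
```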


Baking Logic In Matter

Claude Shannon made the first attempt to ‘etch’ mathematical logic into electric circuits. He used the principles of Boole’s The Laws of Thought to construct a correspondence between propositional logic and relay circuits, which allowed computer scientists to directly import decades of work in mathematical logic. In his paper A Symbolic Analysis of Relay and Switching Circuits, he showed how Boolean logic can be used to create circuits that perform binary addition. Many such adder circuits can be combined to perform basic arithmetic, logical, and shift operations. This formed the basis of the arithmetic logic units in modern computers.
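
To get a feel for the correspondence, here is a toy software analogue (Python standing in for relays) of the kind of adder circuit Shannon described: a one-bit full adder built only from Boolean operations, chained into a ripple-carry adder.

```python
# A one-bit full adder built only from Boolean operations,
# chained into a ripple-carry adder: a toy software analogue of relay circuits.
def full_adder(a, b, carry_in):
    sum_bit = a ^ b ^ carry_in                  # XOR produces the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # AND/OR produce the carry
    return sum_bit, carry_out

def ripple_carry_add(x_bits, y_bits):
    """Add two equal-length bit lists (least significant bit first)."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 6 (110) + 3 (011), least significant bit first: [0,1,1] + [1,1,0]
print(ripple_carry_add([0, 1, 1], [1, 1, 0]))  # -> [1, 0, 0, 1], i.e. 9
```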

Shannon was also the first to make a distinction between the physical and logical layers of computers. This is common sense in the 21st century, but in his time it was a key philosophical leap. Since Shannon, vast swaths of progress have been made in the physical layer of computers. In 1947, John Bardeen, Walter Brattain, and William Shockley invented the transistor at Bell Labs. It was a radical improvement over the relays of Shannon’s simple circuits and became the best-known way to physically encode Boolean logic.

While Shannon was busy showing the world how to mimic logical thought in electric circuits, Alan Turing showed how to design computers in the image of mathematical logic. His pioneering work on computer design led to the general-purpose computers (universal Turing machines) of today.

Turing’s paper … contains, in essence, the invention of the modern computer and some of the programming techniques that accompanied it.

Marvin Minsky

Turing’s computer architecture was later called the “von Neumann architecture” (von Neumann himself would later acknowledge that it came from Turing). In this architecture, the distinction between machine, program, and data dissolves. Machines of Turing’s time kept them physically separate: the computing machinery, the punched cards, and the input codes. Turing theorized that any logic that could be represented in hardware could also be represented in software.

World War II brought out the best in von Neumann and Turing. They independently wrote the EDVAC and ACE specifications, respectively. These would presage some of the modern debates in computing: Turing’s ACE resembled RISC architectures (fewer hardware instructions, more work in software), while von Neumann’s EDVAC resembled CISC architectures (more hardware instructions, less work in software).

Von Neumann thought programming would be boring, monotonous work. In contrast, Turing thought that it “should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.”

The vast majority of computer programming since Turing has followed the tradition of deductive logic, i.e., using formal systems (programming languages) to specify sets of rules for manipulating symbols.

Human Knowledge is Infinite

Logic began as a way to understand the laws of thought. So far, we have managed to build computers that reason deductively. Now a new stream of applications, built on a different system of logic, is entering all aspects of ordinary life, much as personal computers became commonplace over the past few decades.

In the 1940s, Warren McCulloch and Walter Pitts set out to derive a calculus for neurons that could be used to build computer circuits. The approach has fallen out of favor several times since, but it has returned with renewed strength, this time paying homage to the theme of inductive reasoning. Combined with statistical learning techniques, machine learning has already shown the world what it can do. From synthesizing text to generating synthetic 2D worlds and redefining the very notion of what art could be, there have been more than enough demonstrations of what this technology could hold for the future.
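
As a rough sketch of what such a calculus looks like (the weights and thresholds below are illustrative choices, not McCulloch and Pitts’ own presentation), a threshold neuron fires when the weighted sum of its binary inputs reaches a threshold, which is already enough to realize basic logic gates:

```python
# A McCulloch-Pitts style threshold neuron: binary inputs, fixed weights,
# fires (outputs 1) when the weighted sum reaches the threshold.
def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logic gates as single neurons (weights and thresholds are illustrative).
def AND(a, b):
    return neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    return neuron([a], [-1], threshold=0)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]
print([NOT(a) for a in (0, 1)])                                  # [1, 0]
```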

Before 1945, no human had ever witnessed a nuclear fission explosion. There may never have been one in the history of the universe. All of this speaks to the theme that human knowledge is unbounded. For however long we may exist, we will continue to drive progress with reason and science.