To mark the launch of our new Philosophy digital catalogue, Tomas Elliott looks into the early history of computing through the work of philosopher Gottfried Wilhelm Leibniz.
In 1948, the American mathematician Norbert Wiener identified an unlikely source for the computerized codebreaking that had hastened the end of World War II: the 17th-century German philosopher, Gottfried Wilhelm Leibniz. “The history of the modern computing machine,” Wiener claimed, “goes back to Leibniz and Pascal. Indeed, the general idea of a computing machine is nothing but a mechanization of Leibniz’s calculus ratiocinator.”
According to Wiener, Leibniz was “the patron saint of cybernetics,” Wiener’s theory of information being named after the Ancient Greek term “kubernetes” (meaning the captain or helmsman of a ship). Cybernetics attempted to account for how various systems of organization and governance—from the tiniest chemical reactions in cells to the processes in modern computing—could be understood in terms of the encoding, transmission, and decoding of information. It was, in other words, the first ever theory of cyber culture in the rapidly developing age of the machine. For Wiener, that age did not begin with the Turing machine, invented in 1936, nor even with the difference and analytical engines of Charles Babbage and Ada Lovelace in the early 19th century. It began, instead, with the “calculating machine” of Gottfried Leibniz.
Leibniz anticipated modern computing in two significant ways. The first of these was intellectual. Leibniz’s philosophical system took as one of its foundational premises the idea that the world and all its interlocking systems can be understood rationally through an appeal to universal logic and mathematical notation. In the same way that our modern computers approach the most complex tasks by reducing them to a logical code of ones and zeroes, so too did Leibniz believe that all the infinitesimal beauty of the world could be explained through a symbolic logic and, ultimately, a binary system of computation that he himself helped to develop.
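Leibniz set out his binary notation in detail in his 1703 essay on binary arithmetic. As an illustrative sketch only (a modern rendering, not Leibniz’s own procedure), here is how a number and a sum reduce entirely to ones and zeroes:

```python
# Illustrative sketch: arithmetic reduced to ones and zeroes,
# in the spirit of Leibniz's binary notation (a modern rendering,
# not a reconstruction of his own method).

def to_binary(n):
    """Write a non-negative integer as a string of ones and zeroes."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # the remainder gives the lowest bit
        n //= 2
    return "".join(reversed(bits))

def binary_add(a, b):
    """Add two binary strings digit by digit, carrying as needed."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        result.append(str(total % 2))
        carry = total // 2
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(to_binary(1673))  # the year of the Royal Society demonstration -> 11010001001
print(binary_add(to_binary(6), to_binary(7)))  # 6 + 7 -> 1101 (i.e. 13)
```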
In Leibniz’s thought, this idea of a universe underwritten by logic is crystallized in his “principle of sufficient reason,” which formed one of the “two great principles” of the Monadology, the crowning achievement of his later philosophy. This principle stated that “nothing happens without a reason” (or that “every effect has a cause”). If this is the case, then every effect can ultimately be described by a logical system (or, in Leibniz’s terms, “a universal language”), an idea central to modern computing, where lines of code translate (and are translated into) complex qualitative phenomena.
This brings us to the second way in which Leibniz anticipated computing. He believed that anything that can be computed by machines should be. Leibniz was well aware that, given the endless complexity of the universe, “most of the time, reasons cannot be known to us.” But he felt that, if machines could take on some of the labour of thought, then humans would be freer to tackle the world’s more complex problems. That idea still underlies computer-based research today.
Accordingly, Leibniz set out to develop the first machine that could perform all four operations of arithmetic: addition, subtraction, multiplication, and division. In its finished design, his “calculating machine” could process sums with figures of up to sixteen digits. While it had some flaws (and its computational power was nothing compared to today’s digital calculators), it represented a revolution in the arithmetic of the day; it was a truly modern piece of computational hardware.
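The machine achieved multiplication and division mechanically, as repeated addition and subtraction driven by its stepped drum. As a rough software analogy (my own sketch of that principle, not a model of the actual mechanism), with the sixteen-digit capacity of the finished design:

```python
# A rough software analogy of the stepped-reckoner principle:
# multiplication as repeated addition, division as repeated subtraction.
# The sixteen-digit cap mirrors the capacity of Leibniz's finished design.

CAPACITY = 10**16  # results of seventeen or more digits overflow

def checked(n):
    """Reject any intermediate result wider than sixteen digits."""
    if n >= CAPACITY:
        raise OverflowError("result exceeds sixteen digits")
    return n

def multiply(a, b):
    """Multiply non-negative integers by adding a to a total, b times."""
    total = 0
    for _ in range(b):
        total = checked(total + a)
    return total

def divide(a, b):
    """Divide by subtracting b repeatedly; return quotient and remainder."""
    quotient, remainder = 0, a
    while remainder >= b:
        remainder -= b
        quotient += 1
    return quotient, remainder

print(multiply(123, 45))  # -> 5535
print(divide(123, 45))    # -> (2, 33)
```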
Leibniz presented the first prototype of the machine to the Royal Society in London in February 1673. This was a fateful meeting, and Leibniz’s relationship with the Society would go on to colour much of his later work. Most notably, in 1699, Leibniz was accused by members of the Society of having plagiarized his calculus from Isaac Newton, a claim that also threw doubt on the originality of his technological inventions. Nowadays, most historians agree that both Leibniz and Newton developed their calculus independently, but the affair is remembered for the intellectual exchange that arose from it: a correspondence between Leibniz and Samuel Clarke, which lasted from 1715 until Leibniz’s death the following year. That exchange saw Leibniz defend his views against the Newtonian Clarke, who later published their correspondence in English in 1717.
Much of the debate contained within their letters seems technical and obscure to us today. It focused primarily on the difference between Newton’s absolutist conception of space and Leibniz’s relativistic model. There was a lot at stake in that distinction, however, including not just physics but the makeup of the human soul and the nature of God Himself.
In his Principia Mathematica, Newton had claimed that “absolute space… remains always similar and immovable.” Leibniz stated, however, that if this were the case, there would be no reason “why everything was not placed the quite contrary way, for instance, by changing east into west.” In other words, space itself (and God’s design of it) would be arbitrary. But an arbitrary universe would have violated Leibniz’s principle, mentioned above, that “nothing happens without a reason.” That reason, in fact, was the most famous in Leibniz’s philosophy: God arranged space in the best way possible. In other words, He designed the “best of all possible worlds.”
Nowadays this model of the universe is best remembered for the biting critique that it suffered at the hands of the later French philosopher, Voltaire. In his philosophical novel Candide, Voltaire’s endlessly optimistic philosopher, Pangloss, justifies all of the world’s suffering through the claim that “everything necessarily serves the best end.” Meanwhile, Candide’s experiences of war, famine, disease, and an array of natural disasters cause him to ask: “if this is the best of possible worlds, what then are the others?”
It should be noted, however, that Leibniz’s optimism stemmed ultimately from his belief in the rational logic of nature’s laws. He believed in a Godly universe that was maximally efficient and minimally wasteful. He also believed, therefore, that the universe’s problems could be solved, provided we have the means to solve them. That rationalism continues to inform contemporary computing, where solutions to the world’s problems are sought through ever more effective algorithms.
Wiener, for his part, was far more sceptical than Leibniz about the inherent goodness of a rational, calculable, and mechanized universe. Of course, the world of 1948 was very different from the world of 1673. Leibniz’s God had long since departed, abandoning humanity to the destruction of the atomic bomb and the guided missile, two other technological “advances” ushered in by the age of information. Fittingly, Wiener noted at this time that the harnessing of computers and machines had “unbounded possibilities for good and evil.” A Leibnizian optimism still lingered, therefore, but one tempered with post-war caution. Now, seventy years after Cybernetics and three centuries after the Monadology, we’re still waiting to see whether the new era of machine learning, big data, and artificial intelligence—all of which share in Leibniz’s legacy—will open out onto the best of all possible worlds that the German philosopher once envisioned.
Our new digital list, Philosophy, features some of the most influential and controversial works in the development of human thought, from the ancient to the modern age.
The post Gottfried Wilhelm Leibniz: The best of all possible computers appeared first on Peter Harrington Blog.