
Digital rhetoric, literae humaniores and Leibniz’s dream

Occasional paper 1

In November 2017 Professor Willard McCarty (Professor Emeritus, Humanities Computing, King’s College London) delivered the first in a new series of Voltaire Foundation digital humanities lectures entitled ‘Digital rhetoric, literae humaniores and Leibniz’s dream’.

This lecture inaugurates a rather different focus, on digital computing in scholarship, and so immediately raises the question of what the one has to do with the other. Reason is the link: the preoccupation of the Century of Light and the purpose of the machine built for implementing ‘the capacity for rational thought’ (OED).[1] With 70 years of its history in the arts and in the natural and human sciences, identifying the digital machine in this way may seem new, even odd from a utilitarian or technical perspective. But given the ongoing, noisy confusion over its implications for how we live our professional and private lives, going back to its origins and into the processes of thinking it entails seems to me the best way of making sense of it as a work of human art for humane ends. Doing so raises in turn two practical questions: what fundamentals of reasoning do we need to consider as scholars when we set about to implement our specific projects? How might we expect its implementations to change what we do?

            A warning: this lecture is necessarily tentative and exploratory, both because digital computing in the scheme of things is young and because the machine was designed for exploration. In describing his own work in an interview, the eminent Japanese roboticist Masahiro Mori compared himself to the dog in the Japanese folktale Hanasaka Jiisan, [FIGURE 1] who runs out into an old couple’s garden sniffing the ground, then stops and barks insistently, ‘dig here!’, ‘dig here!’. The man digs and finds a hoard of gold coins. ‘I seem to have a good nose for sniffing out interesting things,’ Mori commented, ‘but I don’t have the skill to dig them up. That’s why I bark “Dig here!” and then other people will dig and find treasure.’[2] That’s what you can expect of me as well, to run about in our garden, stop, bark and hope you will dig.

1. To the deep end

I would guess that you will have heard of the literary historian Franco Moretti’s term ‘distant reading’, which refers to the use of computational methods to find patterns in large textual corpora. In the epigraph to his first discussion of it, Moretti quotes Aaron’s words from the libretto to Arnold Schoenberg’s unfinished and quite unorthodox opera Moses and Aaron: ‘My mission: to say it more simply than I understand it.’[3] Moretti strikes an implicit analogy: on the one hand two humanly comprehensible simplifications of the transcendent, Aaron’s Golden Calf and computational results; on the other hand, two rich complexities beyond comprehension: the idea of the divine and the whole of literature. Between them is the negotiator. Leibniz saw the relationship somewhat differently: not an external drama but the ongoing internal negotiation within a happy marriage joining theoreticians and empirics.[4] This relationship, between ‘the dirt and the word’ (as Emily Vermeule once said of archaeology and philology),[5] is where the promise of digital computing unfolds. My aim here is to question and explore that promise.

            To my mind the negotiating is central: it shifts attention from the fact that computing and the humanities have intersected to the actions we take in the intersection. Leibniz’s metaphor of negotiation, the felix connubium, is a provocative one, for in a relationship of that kind, each partner affects the other and is in turn affected; in a manner of speaking they co-evolve. So also with the developmental spiral in the history of technology, from invention to assimilation to new invention. The traffic between scholarship and computing is usually conceived and followed as if it were a one-way street. That is wrong; we must change it.

            Perhaps it was Leibniz or perhaps Voltaire who was the last in the West to be capable of truly encyclopaedic knowledge. In that sense, at least, this lecture sits well under both their names. Undoubtedly the many specialisms into which the whole range of their concerns has branched, and branched again and again, are affected by the digital machine. But I want to argue for more than that: for the encyclopaedic relevance of digital computing and scholarship to each other – and particularly, because it is so undervalued by experts on both sides, computing’s need for the rich inheritance and ongoing work of our literae humaniores. I want to summon, to invite, to entice as many bearers of grain from as many academic silos as possible, including those silos that have been around since long before the digital machine was invented, so that our understanding of how to use it does not suffer from starvation of the mind and spirit. In a nutshell this understanding shows us that the machine is not just an obedient servant to our whims, thoughts and questions. By its resistance to us as well as by its enabling amplification of our imaginative capacities it has the potential of becoming a powerful companion with which to reason, to question and perhaps, someday, with whom to converse.

            Allow me to enlarge the point. If the computer were only a platform for useful applications – a knowledge jukebox, filing system, communications platform, digital typewriter and the like – then there would be much less for me to talk about. As ubiquitous and central to our work as these applications are, as much as they affect us, they are not my subject. Nor are the genuine accomplishments of digital scholarship, brilliantly summarised by Laura Mandell in a blog-post, ‘Experiencing the bust’, responding to a polemical rant in the American Chronicle of higher education.[6] I won’t repeat her insightful defence or add to her list of worthy studies, which are indeed growing in number and easy to find – as long as you know where to look and can separate the wheat from the chaff.

            Later I will return to use of the machine for straightforward access to stuff. But throughout this lecture my emphasis will be on the enlightening cognitive resistance I just spoke of and on a kind of problematic liberation inherent to the machine from its beginnings. I want to persuade you that very quickly such enquiry leads to the deep end in which the older disciplines have from their beginnings been swimming and teaching others to swim.

2. Machines to think with

But since Professor Cronk invited me here, I must at least begin with a class of examples relevant to the Voltaire Foundation’s ambitious editorial project.

            In his essay ‘Editing as a theoretical pursuit’ Jerome McGann cites a number of remarkable scholarly editions emergent, as he says, ‘under a digital horizon [… prophesying] an electronic existence for themselves’. These codices, he continues, ‘comprise our age’s incunabula, books in winding sheets rather than swaddling clothes. At once very beautiful and very ugly, fascinating and tedious, these books drive the resources of the codex to its limits and beyond’.[7]

            Hence the urgent dream of the digital edition, with its many TEI-encoded approximations. I do not want to seem to belittle or arrogantly to correct the industry of scholarship producing these editions. Rather I want to talk about this industry’s ‘theoretical pursuit’ from a theoretical perspective relevant to all the disciplines of the arts and letters (for which I have co-opted the term literae humaniores). Again, my subject is the potential of the digital medium to respond to the emergent demands not only of the critical edition but also to all other scholarly forms of expression.

            The critic I. A. Richards, who helped establish English literary studies at Cambridge, began his book Principles of literary criticism with the assertion that ‘A book is a machine to think with.’[8] (Richards compared his book ‘to a loom’ – perhaps not coincidentally Jacquard’s machine, whose control mechanisms Charles Babbage adopted for his Analytical Engine in 1836.)[9] With the digital edition in mind, I want to put you in mind of how the design of any such ‘machine to think with’, from cuneiform tablets and the papyrus scrolls of Alexandria to the paperback, shapes the thinker’s cognitive paths. Consider these 6 examples: [FIGURE 2]

  1. A 9th-century glossed manuscript of Martianus Capella’s Late Antique work De nuptiis, On the marriage of philology and Mercury, in which the glosses weave together traditional authorities and commentary with the 5th-century text. Glosses here and elsewhere do not always clarify; sometimes they obscure by encryption, word-play, puzzles, allegories and etymologies, paradoxically revealing by concealing. The reader’s path is often, purposefully, an intricate maze.[10]
  2. The verbal concordance, invented at the monastery of St Jacques in Paris in the late 12th or early 13th century, here in an early 14th-century manuscript. This research tool, as Rouse and Rouse say,[11] directs the enquiring mind from a given word to all those passages of the Vulgate where it is attested; reading is to some degree randomised as well as directed.
  3. A 20th-century but solidly traditional English Bible, whose central column links passages, both within each Testament and between them, not by the words they use but according to the typological structure they illumine, under the principle articulated by St Augustine for the Christian Bible: ‘in the Old the New is concealed, and in the New the Old is revealed’.[12]
  4. A contemporary, computer-generated keyword-in-context concordance, whose format forestalls reading, redirecting attention from the centred word to its nearest neighbours left and right. It is the basis for the field of Corpus Linguistics, according to linguist J. R. Firth’s dictum, ‘You shall know a word by the company it keeps!’[13] (A minimal sketch of such a concordance follows this list.)
  5. Ramon Llull’s 14th-century paper machine, designed to engage the reader combinatorially in correlating the terms of a ‘spiritual logic’.[14]
  6. One of Herman Goldstine’s and John von Neumann’s diagrams to illustrate the logical flow of automated reasoning during operation of a stored-program digital computer. More about that later.
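
By way of illustration of item 4, here is a minimal sketch of how such a keyword-in-context concordance might be generated. Python, the window of four words and the crude tokenisation are my choices for illustration; they are not part of the lecture or of any particular concordancing tool.

```python
import re

def kwic(text, keyword, window=4):
    """Minimal keyword-in-context concordance: every occurrence of
    `keyword` is centred, with `window` words of context on each side.
    Tokenisation here is crude (word characters only) and purely
    illustrative."""
    tokens = re.findall(r"\w+", text.lower())
    lines = []
    for i, token in enumerate(tokens):
        if token == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>30}  [{token}]  {right}")
    return lines

# The centred word and the company it keeps, occurrence by occurrence.
sample = ("You shall know a word by the company it keeps; "
          "a word out of place keeps poor company.")
for line in kwic(sample, "word"):
    print(line)
```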

My point is phenomenological and applies to the whole history of verbal communication: that the medium mediates; that tools shape the thoughts and actions of the person using them, and themselves embody what philosopher Davis Baird has called ‘thing knowledge’.[15] The question for us is how the thing-knowledge of the digital machine shapes thought differently from that of other media.

3. Modelling

The best way to prise open my topic is with the concept and practice of modelling, that is, the iterative manipulation of a software model. This is where computing as we know it begins, with a digital representation of whatever is to be manipulated.[16]

There are two kinds of model: a model of something in order to develop one’s understanding of it, and a model for something imagined or known only from surviving evidence.[17] Either kind is a necessarily simplified construct according to the machine’s rigorous constraints of discrete, all-or-nothing binary logic – ones and zeroes, as we say. What’s important is that the model renders the real or imagined object of interest computationally tractable, that is, it represents that object in a completely explicit and absolutely consistent manner. (These startlingly harsh axioms of digitisation, as I call them, have crucial implications I will return to in a moment.) Modelling is thus not strictly mimetic of either an object in the real world or an imagined one, because both require translation into binary terms and a choice of what is to be included – hence rigorous compromise. In return for such sacrifices of truth, the modeller receives enormous combinatorial, manipulatory power over the modelled object or idea. Thus the essential tension of the digital tradeoff, between mimetic fidelity and computational manipulability.
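
To make these axioms concrete, here is the simplest possible sketch of such a model: a short poem reduced to completely explicit, absolutely consistent data. The placeholder lines, and the choices of what to keep and what to discard, are mine and purely illustrative.

```python
# A deliberately impoverished 'model of' a short poem: every choice must
# be completely explicit and absolutely consistent, and much is simply
# discarded (sound, layout on the page, allusion, tone).
poem = [
    "The first line of a poem,",
    "a second, shorter one,",
    "a third that alludes to something,",
    "and a fourth.",
]

model = {
    "line_count": len(poem),
    "tokens": [word.strip(",.").lower() for line in poem for word in line.split()],
}
model["vocabulary"] = sorted(set(model["tokens"]))

# The gain for the loss: the object is now computationally tractable and
# can be counted, compared and recombined at will.
print(model["line_count"], len(model["tokens"]), model["vocabulary"][:5])
```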

4. Modelling of

Let’s consider the first, analytic kind, modelling of something, more closely. It is illustrated here [FIGURE 3]. I will return to the second, synthetic kind later. For modelling-of, note in the figure the progressive circularity of experiencing, simplifying, building, manipulating, comparing, with improved understanding of the modelled object and of the resultant process. But consider what happens during that translation. In the figure, note what the modeller has to work with, what he is constrained to leave out, how much is imagined.

            Let me give you three illustrative examples, in the form of thought-experiments, one art-historical, two literary.

 

a.  Icarus [FIGURE 4]

Consider these 8 paintings on the story of Icarus from the 16th to the 21st century – the titles tell us they’re about Icarus; so, in some cases more than others, do visual clues. From a strictly computational point of view, the problem is deriving a rigorous set of criteria by which an algorithm could discover all of these paintings without the help of their titles, or, if it had those titles, how it could qualify their variations on the literary theme (from Ovid’s Metamorphoses). As Marina Warner wrote recently, when the facts are missing we tell stories: ‘The speculative mind generates experience – imagined experience.’[18] The machine has only the data to go on. The data can, of course, include our choices, and if there are enough of us, produce fairly reliable results. But I am excluding that possibility for purposes of argument.

b.  The identity of the narrator of the last chapter of Barbara Kingsolver’s novel, The Poisonwood Bible [FIGURE 5]

Here we have a novel about an American family taken to what was then the Belgian Congo by the evangelical father, a fiery Baptist. Each chapter is narrated by a member of the family (excluding the father); the narrator’s name is the title of the chapter – except for the last chapter, which has no title. We figure out by complex inference that the narrator is the dead child Ruth May, or rather is the being to whom she reverted when she died, the muntu, who is nameless and all-seeing, encompassing all people born, unborn and dead, as Adah explains at the beginning of the chapter on the left (p.238). The problem is how to follow that inference algorithmically, not to the name ‘Ruth May’ but to the muntu whom Ruth May once instantiated.

c.  Allusion in Seamus Heaney, ‘The Railway children’ [FIGURE 6]

The problem here is what to do about the last 6 words, ‘through the eye of a needle’, which allude to the parable of the rich man in the Synoptic Gospels. Consider what happens poetically when an allusion is recognised but not explicitly labelled, then what happens when it is.

 

In all three cases we are left with a return to the drawing board and a number of questions. These spur us to ask first how we know what we know, then to talk to a programmer (or, better, to ourselves as programmer) about how a better response might be implemented.

            But let me return to the crucial implications of the requirement for computational tractability, the requirement that real-world objects be rendered in a completely explicit and absolutely consistent manner. Clearly, however much pain is entailed by fitting a poem or other work of art into such a Procrustean bed, we know how to do it. We know that the act of translation into discrete, all-or-nothing terms illumines by identifying the untranslatable. We know that once that translation is done, the machine has no problem crunching the bits. But how about the problem the interpreter has when he or she attempts to re-translate the results back into human terms? That, to my mind, is the nub of the matter, again, the negotiation.

            For ‘modelling of’ this is indeed the central and most difficult question. For help with it I turn to historian of science David Gooding’s work on the ways in which an experimenter construes new knowledge of the world from the behaviour of his experimental apparatus. [FIGURE 7] In this diagram, adapted from Gooding’s ‘Varying the cognitive span’,[19] note, again, the circularity of the process: from the modeller’s identification of the artefact, to the reductive translation from it into a model and software that is absolutely consistent (AC) and completely explicit (CE), then to the manipulation within the machine, then the modeller’s interpretative expansion of the results into a human context by means of what Gooding calls ‘construals’. The expansion is then compared to the artefact, the model adjusted and so on. Results that hold up become new knowledge of the modelled artefact.

            Gooding drew his theory of construal from a life-long study of the 19th-century scientist Michael Faraday’s meticulously detailed laboratory notebooks, attempting to recreate what Faraday did.[20] He focused specifically on that re-translation from the controlled environment of the experimental situation to the human world of the experimenter. This took him into the phenomenological, psychological, cognitive and sensuous processes of negotiation by which the experimenter draws on what he called construals – the ‘flexible, quasi-linguistic messengers between the perceptual and the conceptual’[21] – to form provisional scientific knowledge.

            I am strongly inclined to think that Gooding’s theory maps very well onto what happens when we engage in the modelling of a textual corpus, a painting, a musical idea and so forth, to make sense of them. But much more work is required of us: we need the equivalent of Faraday’s laboratory notebooks for research done with computers in the humanities; we need to marshal the work of many decades on human-computer interaction, especially in its attention to performance – ‘computing as theatre’, to adapt the title of Brenda Laurel’s book.[22] We need to think about what it means to treat computational work in the humanities analogously to experimental work in the laboratory sciences, and so pay particular attention to the cognitive psychological aspects of writings by Gooding, Ryan Tweney, Nancy Nersessian and many others. Phenomenology should come into play. If we do all that, we will be far better equipped to design digital critical editions.

            Allow me to put to one side my ignorance and naivety as an amateur in these fields (literally, a lover of what they do but no specialist) to insist on one thing: that we never let slip from sight those axioms of digitisation, the absolute consistency and complete explicitness of digital representation. Some would say, and in fact do say, that the genius of the digital is that it renders the digital irrelevant, as it is when we listen to digitally recorded music. For research uses I think Aden Evens, in his recent book The Logic of the digital (London, 2015), is basically right, that the discrete, all-or-nothing quality of the digital runs all the way from hardware circuitry to user-interface and to the resources the scholar uses. I conclude that to ignore the digitality of the digital is to obscure the foil against which our reasoning struggles and is perhaps transformed when we do the kind of work I am describing.

            I have just suggested a very big question that I haven’t thought enough about to face in public directly, namely reasoning’s changes in prolonged exposure to the machine. But I will dare to guess that there are four aspects of computing’s influence to consider. One is defined by the foil provided by those two axioms. One has to do with the effects of googling on the conventions of normal academic discourse. (Richard Rorty, who writes well about this, has suggested a Gadamerian shift from metaphors of depth to metaphors of breadth, from probing one thing deeply to assembling and comparing many things.[23] But I say no more about that here.) The last two aspects, the combinatorial and simulative powers of the machine, take up the remainder of this lecture.

5. How it works

[FIGURE 8] I refer again to my adaptation of Gooding’s diagram with emphasis added to shift the discussion from the reductive translation of the artefact into a model and from the expansive translation via construals to the middle stage, where the bits are crunched. Here, perhaps especially, intellectual weed control (Geertz’s phrase) is needed.

Many have said, following Lady Lovelace’s dictum, that the machine can only do what it is told to do,[24] or, in the anxious language of the mid 20th century, that the computer is but a ‘fast moron’. IBM made this slur into doctrine in the 1950s to salve public fears of artificial intelligence; the mantra then went viral.[25] But such is not the machine we have, which in essential respects is the machine Herman Goldstine and John von Neumann addressed in the first-ever paper on programming.[26] They pointed out that the design feature which makes the crucial difference is the provision that allows a running program, conditional on the outcome of previous operations, to deviate from the linear sequence of instructions or to rewrite those instructions on the fly.[27] Here again is Goldstine’s and von Neumann’s ‘flow diagram’, as they called it, illustrating the point. [FIGURE 9] They explained – note these words well – that coding ‘is not a static process of translation, but rather the technique of providing a dynamic background to control the automatic evolution of a meaning’ as the machine follows unspecified routes in unspecified ways in order to accomplish specified tasks.[28] Thus Herbert Simon: ‘This statement – that computers can only do what they are programmed to do – is intuitively obvious, indubitably true, and supports none of the implications that are commonly drawn from it.’[29] The idea of ‘machine’ behind it is, as Marvin Minsky remarked, ‘precomputational’.[30]
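
A trivial sketch may make the design feature concrete. The example and the Python are mine, standing in loosely for Goldstine’s and von Neumann’s flow diagrams: the instructions are fixed in advance, but the route taken through them depends on values computed along the way.

```python
def collatz_path(n):
    """Follow the Collatz rule from n down to 1, recording which branch
    is taken at each step. The instructions are specified in advance;
    the route through them (how many iterations, which branch each time)
    emerges only as intermediate results are computed."""
    path = []
    while n != 1:
        if n % 2 == 0:                 # conditional transfer on a computed value
            n = n // 2
            path.append("halve")
        else:
            n = 3 * n + 1
            path.append("triple and add one")
    return path

# Two starting values, two quite different journeys through the same code.
print(len(collatz_path(6)))     # 8 steps
print(len(collatz_path(27)))    # 111 steps
```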

            The complications that result from this design are ‘not hypothetical or exceptional […] they are indeed the norm’; the power of the machine ‘is essentially due to them, i.e. to the extensive combinatorial possibilities which they indicate’.[31] In essence, as von Neumann suggested four years later, machines ‘of the digital, all-or-nothing type’ work by combining and recombining the data under given constraints.[32] In a nutshell, then, the value-added at this stage is combinatorial.

            For an analogy take what happens in a research library, which provides a large number of modular resources in a standard format so that a variety of readers with unforeseen purposes may combine and recombine them ad lib. (We have had such a device at least since the Library of Ashurbanipal, in the 7th century BCE, if I am not mistaken.) On a larger scale, in more recent form, we see more or less the same with the Web, on a smaller scale with a single codex, particularly obvious when it is designed as a reference work, such as a critical edition built to foster recombinatorial liberties. (The works of Voltaire and Leibniz come to mind as fit corpora for such treatment.) But my point is the familiarity of this way of working, though now by the perhaps unfamiliar means of statistical tools for finding patterns in masses of data. Once again, distant reading is the most obvious example. Surprises from the process begin to emerge, Alan Turing suggested in 1950 (by analogy to the critical mass of a nuclear reaction) at a quantitative threshold of complexity.[33] In digital humanities, we find cogent surprises from stylometric analysis as well as from distant reading. See, for example, the work of John Burrows and the pamphlets of the Stanford Literary Lab.[34]
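
As a toy illustration of such statistical pattern-finding, consider the following sketch. The word list, the sample texts and the measure are placeholders of my own, a crude stand-in for serious stylometric work of the kind Burrows and the Stanford Literary Lab report.

```python
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "was"]

def profile(text):
    """Relative frequencies of common function words: a tiny, order-free
    summary that discards nearly everything a close reading attends to."""
    tokens = re.findall(r"\w+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(text_a, text_b):
    """Mean absolute difference between two profiles: a crude stand-in
    for more careful measures such as Burrows's Delta."""
    pa, pb = profile(text_a), profile(text_b)
    return sum(abs(a - b) for a, b in zip(pa, pb)) / len(FUNCTION_WORDS)

# With real corpora one compares many texts at once and looks for the
# clusters and outliers that provoke interpretation.
print(distance("It was the best of times, it was the worst of times.",
               "Call me Ishmael. Some years ago, never mind how long precisely."))
```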

6. Modelling-for and simulation

The last aspect of reasoning with the machine that I will consider begins with synthetic modelling-for. Modelling-for is commonplace in engineering as a way of converging on an optimal design, for example of an airplane wing. In historical or archaeological research, its aim is to create a simulacrum of a phenomenon that no longer exists, in whole or in part, from whatever evidence is at hand and whatever reliable conjectures may be possible. Examples are the Roman Forum and the Theatre of Pompey in Rome, of which little remains visible. These have been reconstructed visually by such modelling.[35] Modelling-for can also be used in the manner of a thought experiment to speculate about what might be or might have been.

            Modelling-for blurs into simulation when the model is, as it were, turned loose to see what can be learned from it, from interacting with it and changing its parameters. Simulation is used to study phenomena that are difficult or impossible to observe, or as the historian of the U.S. Manhattan Project wrote, ‘too far from the course of ordinary terrestrial experience to be grasped immediately or easily’.[36] The physicists of Los Alamos reached beyond their terrestrial experience by simulating the random behaviour of neutrons in a nuclear chain-reaction, using the so-called Monte Carlo technique.[37] Closer to our own interests (to cite two examples, one recent, the other old), linguists have used simulation to study the migration of dialects, naval historians to approximate the course of a battle at sea.[38]
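
The Monte Carlo idea itself can be conveyed by a toy example far removed from Los Alamos: estimating π by scattering random points over a square. This is a classroom stand-in of my own choosing, not a reconstruction of the neutron calculations.

```python
import random

def estimate_pi(samples=100_000):
    """Monte Carlo in miniature: scatter random points over the unit
    square and count those falling inside the quarter circle of radius 1.
    The proportion approximates pi/4; repeated random sampling stands in
    for behaviour too intricate to follow case by case."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(estimate_pi())   # roughly 3.14, varying a little from run to run
```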

            In the physical sciences, including climatology, simulation was just about coeval with the invention of digital computing machinery; indeed, these sciences were a driving force in its earliest development.[39] Work in economics and other social sciences followed soon after. Most of us have forgotten that there is a rich and valuable history of simulation in the creative arts from the 1950s through the 1970s, of possible worlds and of works of art co-created in interaction with the viewer.[40] There is, in other words, a long and complex tradition of modelling of the more imaginative, less mimetic kind that offers us lessons in risk-taking and its rewards, from which we have much to learn.

            I want to conclude by giving you some idea of the limb out on which I now invite you to climb for the view of new things. The philosopher Paul Humphreys argues that in going out on that limb with computing beyond what we can do otherwise, scientific epistemology ceases to be human epistemology.[41] I think that David Gooding is right, that the humanness of our knowledge is a matter of what we do with what we learn in extenso, from what we could not otherwise reach.

            Before I conclude with an example illustrating what I mean by that, with keen awareness of how nervous we are apt to get about venturing out or up beyond ground we take to be solid, allow me to recommend two bracing touchstones: first, American essayist Elaine Scarry’s 1992 article, ‘The Made-up and the made-real’; second, Part B of Canadian philosopher Ian Hacking’s book, Representing and intervening (Cambridge, 1983). Experts in these areas will have more recent items to recommend, I trust, but these two (as well as several others I forbid myself from mentioning) mark important moments of light on my subject.

7. Extending ourselves

Elsewhere I have argued that it is far more useful and true to the practice of simulation – that is, to the use of the machine to reach for knowledge ‘too far from the course of ordinary terrestrial experience’ – to regard the machine not merely as a tool for pushing back ‘the fence of the law’, to adapt Jacob Bronowski’s characterisation of science,[42] nor merely as a tool for adventurous play, but as a prosthesis for the imagination that ranges along Elaine Scarry’s spectrum from ‘making-up to making-real’. I have suggested that at least from the perspective of the disciplines of making, simulation is the essential genius of the machine, and so what we must understand about it and learn to employ. I have suggested that simulation is a tool for restructuring our experience of the world, that as anthropologist Jadran Mimica says of the combinatorial ethnomathematics of the Iqwaye people, it is mythopoetic.[43] Other recent work in ethnomathematics and in the history of combinatorics suggests that cosmological world-making by combining and recombining units of experience, such as the digits we are born with, is as close to a universal language ‘after Babel’ as we are likely to get.[44] Leibniz’s calculemus! and our own combinatorics belong in this world-wide tradition.

            Fascinating (isn’t it?) that the device from which we have so often expected closure turns out to be so paradoxically a tool that does the opposite.

            Historian of biology Evelyn Fox Keller has cautioned us, noting that simulation changes, sometimes radically, from discipline to discipline.[45] I have argued for its continuities across disciplines so as to project it where it has not yet made much headway. What then might it look like in the disciplines of the humanities?

            For a suggestive answer, the best I know, I turn to John Wall’s Virtual Paul’s Cross project (which has led to the larger Virtual St Paul’s).[46] [FIGURE 10] It is an auditory simulation, with supporting visualisations, of John Donne’s Gunpowder Day sermon at Paul’s Cross as it was delivered in London on 5 November 1622. As you will know, both the medieval St Paul’s Cathedral and the Paul’s Cross preaching station within its grounds were destroyed together with the surrounding buildings in the Great Fire of 1666. So, in simulating the acoustics, there is much room for conjecture.

            Wall’s aim is to explore Early Modern preaching through performance by simulating all of what Donne’s congregation could be reasonably expected to have heard on that day. Wall and a number of other Early Modernists argue that the sermons of the time were their performances, the texts we have merely traces of them. Consider the argument by analogy: [FIGURE 11] would we mistake the snippet at the upper left of Figure 11, from the score of the Goldberg Variations, for Bach’s music? The bit at the lower right from the script of Tennessee Williams’ A Streetcar named desire for the play? Of course not; we would know immediately that they are instructions for interpretative performances of the music and the drama respectively, just as now (I hope) you will think of programming code such as that shown in Figure 12 [FIGURE 12] as instructions for a different sort of interpretative performance. But we are so accustomed to silent reading that we are likely to regard the text of Figure 13 [FIGURE 13] as the primary object, and so by extension to overlook that for the sermon, in Paul’s words from his letter to the Romans, ‘faith comes by hearing’ (10:17).

            Social cohesion and political unity come also. Paul’s Cross sermons were composed extempore from notes by a preacher who ‘faced a congregation gathered in the open air and surrounded by a host of distractions, including the birds, the dogs, the horses, the bells, and each other’.[47] Dramatic engagement was essential to their success. What do we need as scholars in order to get closer to the sermon as it was?

            Near-contemporary evidence of Donne’s sermon was confined to the first printed edition of 1649 [FIGURE 14] until Jeanne Shami identified British Library MS Royal 17.B.XX in 1995 as ‘a scribal presentation copy […] corrected in Donne’s own hand’.[48] This is very likely the manuscript that was produced at the request of James I from Donne’s notes, hours, days or weeks after the event. Twenty-seven years passed before the text was printed. The Great Fire destroyed almost everything – but not everything – relevant to the reconstruction just 17 years after that.

            So what is going on? The project, Wall has written, is ‘about what we are doing when we believe we have discovered, from our experience with a digital environment, things about past events that are not documented by traditional sources’.[49] Indeed, what are we doing? This is the brink of the new – or is it so very new? One may wish to retreat to safer ground, but I wonder just how safe that ground is when one looks closely, how much we build, inferential step by inferential step. And is safety our goal? ‘Positive knowledge’ of the sort latter-day positivists quest for is ephemeral. The history, philosophy, sociology and psychology of the sciences from Kuhn onwards furnish a good corrective. Again I refer you to Hacking. We must, of course, be scrupulously careful and cautious. Imagination is a wild beast.

            My hope for Wall’s wonderful project is that someday soon it can become a dynamic simulation in which you and I can twiddle the knobs and see what happens if this or that conjecture is altered. Serio ludere, seriously to play – and play imaginatively – is what our machine is all about. Only Wall and his collaborators at North Carolina State University can do that now, and only quite slowly. And I very much hope that they, as they work, are keeping laboratory notebooks so that we can begin to follow in David Gooding’s footsteps. To do that we need a much wider, more institutionally committed gathering of the disciplinary tribes. In my book that’s what digital humanities is for.

 


[1] In this lecture I do not consider the question of artificial intelligence. I would argue, similarly to Paul Armer, that AI, along with recent work in ethology, pluralises ‘intelligence’, or better, shifts our attention from it to the activity and processes of reasoning (‘Attitudes toward intelligent machines’, in Computers and thought, ed. Edward A. Feigenbaum and Julian Feldman, New York, 1963, p.389-405; here p.390-92).

[2] Norri Kageki, ‘An uncanny mind’, IEEE Robotics and automation magazine 19.2 (June 2012), p.112, 106, 108.

[3] Franco Moretti, ‘Conjectures on world literature’, New left review 1 (2000), p.54-68. The German original is: ‘Meine Bestimmung, es schlechter zu sagen, / als ich es verstehe’ (Act II, Scene 5; Arnold Schoenberg, Schoenberg’s ‘Moses and Aaron’, ed. Karl H. Wörner, transl. Paul Hamburger, London, 1963, p.192).

[4] ‘Theoricos Empiricis felici connubio zu conjugiren und mit einem des andern defectus zu suppliren’ (Gottfried Wilhelm Leibniz, Grundriss eines Bedenkens von Aufrichtung einer Societät [1671], in Sämtliche Schriften und Briefe, Reihe 4. Politische Schriften, Band 1, Akademie der Wissenschaften der DDR, Berlin, 1983, p.538).

[5] ‘Archaeology and philology: the dirt and the word’, Presidential Address 1995, Transactions of the American Philological Association 126 (1996), p.1-10.

[6] ‘Experiencing the bust’, College Station, Texas: Initiative for Digital Humanities, Media, and Culture, http://idhmc.tamu.edu/node/191#fn:hard (18 November 2017), responding to Timothy Brennan, ‘The Digital humanities bust’, Chronicle of higher education (15 October 2017).

[7] Radiant textuality: literature after the world wide web (New York, 2001), p.79.

[8] Principles of literary criticism (London, 1924), p.1.

[9] James Essinger, Jacquard's web: how a hand-loom led to the birth of the information age (Oxford, 2004); Jessica Riskin, The Restless clock: a history of the centuries-long argument over what makes living things tick (Chicago, 2016); Vernon Pratt, Thinking machines: the evolution of artificial intelligence (Oxford, 1987), p.113-25. In Lady Ada Lovelace’s words, ‘the Analytical Engine weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves’ (Ada Lovelace, translator’s notes to L. F. Menabrea, ‘Sketch of the Analytical Engine invented by Charles Babbage Esq.’, in vol.3 of Scientific memoirs, selected from the transactions of foreign academies of science and learned societies, and from foreign journals, ed. Richard Taylor, London, 1843, p.666-731; here p.696 [see the Internet Archive]).

[10] Sinead O’Sullivan, ‘The sacred and the obscure: Greek in the Carolingian reception of Martianus Capella’, The Journal of medieval Latin 22 (2012), p.67-94.

[11] Mary A. Rouse and Richard H. Rouse, ‘The Development of research tools in the thirteenth century’, in Authentic witnesses: approaches to medieval texts and manuscripts (Notre Dame IN, 1991), p.221-55; R. H. Rouse and M. A. Rouse, ‘The Verbal concordance to the Scriptures’, Archivum fratrum praedicatorum 44 (1974), p.5-30.

[12] ‘quamquam et in Vetere Nouum lateat, et in Nouo Vetus pateat’, Augustinus, Quaestiones in Exodo 73. On typology see esp. Erich Auerbach, ‘Figura’, trans. Ralph Manheim, in Scenes from the drama of European literature, Theory and history of literature, vol.9 (Minneapolis MN, 1984, 1944).

[13] ‘A synopsis of linguistic theory, 1930-1955’, in Studies in linguistic analysis (Oxford, 1957), p.1-32; here p.11.

[14] Mark D. Johnston, The Spiritual logic of Ramon Llull (Oxford, 1987).

[15] Thing knowledge: a philosophy of scientific instruments (Berkeley CA, 2004).

[16] See Willard McCarty, Humanities computing (Basingstoke, New York, 2014, 2005); Mary S. Morgan, The World in the model: how economists work and think (Cambridge, 2012).

[17] The distinction is Clifford Geertz’s, for which see Clifford Geertz, The Interpretation of cultures: selected essays (New York, 1973), p.93f. By implication, as Marvin Minsky points out, all models are trinary, involving the modelled object, the modeller and the model he or she fashions in accordance with his or her interpretation of the object. See ‘Matter, mind, and models’, in Semantic information processing, ed. Marvin Minsky (Cambridge MA, 1968), p.425-32; here p.426. See also http://groups.csail.mit.edu/medg/people/doyle/gallery/minsky/mmm.html (19 November 2017).

[18] ‘Diary’, London review of books 39.22 (16 November 2017), p.37-39.

[19] ‘Varying the cognitive span: experimentation, visualization, and computation’, in The Philosophy of scientific experimentation, ed. Hans Radder (Pittsburgh PA, 2003).

[20] David Gooding, Experiment and the making of meaning (Dordrecht, 1990).

[21] ‘How do scientists reach agreement about novel observations?’, Studies in the history and philosophy of science 17.2 (1986), p.205-30; here p.208.

[22] Computers as theatre, 2nd edn. (Upper Saddle River NJ, 2014).

[23] ‘Being that can be understood is language’, in Gadamer’s repercussions: reconsidering philosophical hermeneutics, ed. Bruce Krajewski (Berkeley CA, 2004), p.21-29.

[24] Translator’s notes to L. F. Menabrea, ‘Sketch of the Analytical Engine invented by Charles Babbage Esq.’, p.722; see also Alan Turing, ‘Computing machinery and intelligence’, Mind 59.236 (1950), p.433-60; here p.450, 454.

[25] Pamela McCorduck, Machines who think: a personal inquiry into the history and prospects of artificial intelligence (San Francisco, 1979), p.159, 173, cf. p.126. See also Paul Armer, ‘Attitudes toward intelligent machines’, in Computers and thought, ed. Edward A. Feigenbaum and Julian Feldman (New York, 1963), p.389-405.

[26] Planning and coding of problems for an electronic computing instrument, Report on the mathematical and logical aspects of an electronic computing instrument. Part II, vol.1-3 (Princeton NJ, 1947). https://library.ias.edu/files/pdfs/ecp/planningcodingof0103inst.pdf (18 November 2017).

[27] This provision is noted by Lady Lovelace, p.675.

[28] Planning and coding of problems for an electronic computing instrument, p.2.

[29] The New science of management decision (New York, 1960); see also Edward A. Feigenbaum and Julian Feldman, eds., Computers and thought (New York, 1963), p.3-4.

[30] McCorduck, Machines who think, p.71.

[31] Goldstine and von Neumann, Planning and coding of problems for an electronic computing instrument, p.2.

[32] ‘The general and logical theory of automata’, in Cerebral mechanisms of behavior. The Hixon symposium, ed. Lloyd A. Jeffress (New York, 1951), p.1-41; here p.16.

[33] ‘Computing machinery and intelligence’, p.454.

[34] ‘Never say always again: reflections on the numbers game’, in Text and genre in reconstruction: effects of digitalization on ideas, behaviour, products and institutions, ed. Willard McCarty (Cambridge, 2010), p.13-16; https://litlab.stanford.edu/pamphlets/ (18 November 2017).

[35] For the Forum see the Digital Roman Forum project, http://dlib.etc.ucla.edu/projects/Forum/ (18 November 2017); for the Theatre of Pompey, see the Pompey Project, http://www.pompey.cch.kcl.ac.uk (18 November 2017).

[36] David Hawkins, Inception until August 1945. Vol. 1 of Manhattan district history. Project Y. The Los Alamos project (Los Alamos NM, 1946). http://www.osti.gov/manhattan-project-history/Resources/library.htm (18 November 2017).

[37] Peter Galison, ‘Computer simulations and the trading zone’, in The Disunity of science: boundaries, contexts, and power, ed. Peter Galison and David J. Stump (Stanford CA, 1996), p.118-57.

[38] William A. Kretzschmar Jr and Ilkka Juuso, ‘Simulation of the complex system of speech interaction: digital visualizations’, Literary and linguistic computing 29.3 (2014), p.432-42; Philip H. Smith Jr, ‘The Computer and the humanist’, in Computers in humanistic research: readings and perspectives, ed. Edmund A. Bowles (Englewood Cliffs NJ, 1967), p.16-28.

[39] Willard McCarty, ‘Modelling the actual, simulating the possible’, in The Shape of data in digital humanities, ed. Julia Flanders and Fotis Jannidis (London, 2018 [forthcoming]).

[40] Bronać Ferran and Elizabeth Fisher, eds., The Experimental generation, special issue of Interdisciplinary science reviews 42.1-2 (2017).

[41] Paul Humphreys, Extending ourselves: computational science, empiricism, and scientific method (Oxford, 2004), p.8.

[42] ‘Knowledge as algorithm and metaphor’, in The Origins of knowledge and imagination, Silliman Memorial Lecture, Yale University (New Haven, 1978), p.41-63; here p.59.

[43] Intimations of infinity: the cultural meanings of the Iqwaye counting and number system (Oxford, 1988), p.5.

[44] On ethnomathematics see Marcia Ascher, Mathematics elsewhere: an exploration of ideas across cultures (Princeton, 2002); on the history of combinatorics see Robin Wilson and John J. Watkins, eds., Combinatorics ancient and modern (Oxford, 2013), Part II.

[45] Evelyn Fox Keller, ‘Models, simulation, and “computer experiments”’, in The Philosophy of scientific experimentation, ed. Hans Radder (Pittsburgh PA, 2003), p.198-215.

[46] For both, see the Virtual St. Paul’s Cathedral Project https://vpcp.chass.ncsu.edu (18 November 2017).

[47] John N. Wall, ‘Transforming the object of our study: the Early Modern sermon and the virtual Paul’s Cross project’, Journal of digital humanities 3.1 (2014). http://journalofdigitalhumanities.org/3-1/transforming-the-object-of-our-study-by-john-n-wall/ (18 November 2017).

[48] ‘New manuscript texts of sermons by John Donne’, English manuscript studies 1100-1700 15 (2007), p.77-119; here p.77.

[49] John N. Wall, ‘Gazing into imaginary spaces: digital modeling and the representation of reality’, New technologies in medieval and renaissance studies 6 (2016), p.283-317; here p.283.