Text, Editing, and Bibliography in the Electronic Age
Electronic textuality is a relatively recent concept, yet one that has already had a significant impact upon the practice of scholarly editing. Scholars have debated the subject of textual bibliography and the issue of copy-text throughout the twentieth century without, it seems, reaching any firm conclusions. The term ‘copy-text’ was first coined by Ronald McKerrow almost one hundred years ago (McKerrow and Nashe 1904), but disagreement still remains about how a text can or should be established. The arrival of electronic texts makes this problem even more complex.
Advances in information technology have meant that scholars now have access to new and ever more sophisticated tools to assist them in the preparation of traditional codex editions and to aid textual analysis. Increasingly, however, some editors are choosing to exploit the potential of digitised material and the advantages of hypertext to produce texts in an electronic format, whether as editions or as archives. This raises various issues, including the role of the editor and the relationship of the reader to the text.
One of the most influential and oft-quoted theses on the subject of textual scholarship, and one which has provoked a significant amount of debate, is W. W. Greg’s paper entitled ‘The Rationale of Copy-Text’ (1950). In this article, Greg highlights the difficulties of the prevailing editorial practice of attempting to select whichever extant text is the closest to the words that an author originally wrote and using this as the copy-text. In the case of printed books, this is generally considered the first edition, but Greg argues that an over-reliance on this one text, to the exclusion of all others, is problematic.
He advocates that a distinction be made between ‘substantive’ and ‘accidental’ readings of a text and suggests that the two should be treated differently. (He uses these terms to distinguish between readings that affect the meaning and readings that only affect the formal presentation of a text.) Greg asserts that it is only when dealing with accidentals that editors should adhere rigidly to their chosen copy-text and that in the case of possible variants in substantive readings there is a case to be made for exercising editorial choice and judgement. He argues that it is only through being allowed to exercise judgement that editors can be freed from what he terms ‘the tyranny of the copy-text’ (p. 26).
This argument is taken up and developed by Fredson Bowers (1964; 1970). Bowers takes issue with some of the finer points of Greg’s argument, but agrees that whilst rules and theories are necessary, the very nature of editing means that a certain amount of editorial judgement will always be needed. G. Thomas Tanselle (1975) examines the arguments of both scholars, expands them, and looks at how their work and theories have affected the practice of scholarly editing. Greg, Bowers, Tanselle and others have slight differences of emphasis. They are, however, all in broad agreement with the principle of synthesising two or more variant editions into one text that represents as closely as possible an author’s intention.
The debate about copy-text and its role in scholarly editing rests largely on the status of authorial intention and the extent to which this is possible to discern and represent in a text. Michel Foucault, in his paper entitled ‘What is an author?’ (1984), argues that even when there is little question about the identity of the author of a text, there remains the problem of determining whether everything that was written or left behind by him should be considered part of a work. Do notes in a margin represent an authorial addition or amendment, or did the author simply scribble in the margins a sudden thought that he wanted to remember and refer to later? Such issues remain a subject of debate and are some of the many problems with which editors are faced.
The practice of editing will always generate problems that scholars need to address and this is the basis for David Greetham (1994) and Peter Robinson’s (1997) assertions that to a certain extent, all editing must be seen as conjectural. However, in his examination of the history of textual criticism, Greetham finds that there has been a fluctuation between two equally extreme schools of thought.
The first, he suggests, maintains that a correct reading of a text is discoverable ‘given enough information about the texts and enough intelligence and inspiration on the part of the editor’ (p. 352). The opposing position is one that claims that any speculation on the part of an editor is likely to result in a move away from authorial intention. Because of this, scholars who hold this belief argue that documentary evidence should be given priority over editorial judgement and that, wherever possible, this evidence should take the form of only one document – that chosen as the copy-text.
Yet scholars have found that it is sometimes impossible to establish one ‘correct’ text. Jerome McGann (1983; 1996; 1997) believes the very notion to be false, and Peter Donaldson (1997) argues that traditional scholarly editions can be misleading as their very nature suggests that a text is fixed and authoritative when the reality is often very different.
Taking the plays of Shakespeare as an example, he suggests that the collaborative nature of life surrounding the London theatres in the Renaissance, combined with the fact that the author did not intend his work to be published, means that variants cannot and should not be ignored. Moreover, he contends, in some cases a single original text may never have existed. Donaldson argues that technology can be used to create new forms of text that incorporate variants in a way that is not practical in a codex edition. Donaldson is himself involved in a project that seeks to do this and he refers to his own experiences in assembling an electronic archive of the works of Shakespeare.
Electronic texts provide some solutions to the problems of editing, but they also raise new issues and opinions are divided about the way in which they can best be used. Some scholars welcome digital texts as a tool to aid the preparation and production of traditional scholarly editions whilst others prefer to look to electronic textuality as a medium for the publication of a different type of edition – an electronic edition.
Several authors (Donaldson 1997; Greetham 1997; Hockey 2000; Robinson 1997) examine the way in which new developments in information technology affect the traditional process of scholarly editing. Robinson, for example, examines the analytic functions of electronic text and provides examples of instances in which computer-aided collation has assisted in the preparation of scholarly editions. He cites his own experiences in the production of Chaucer’s The Wife of Bath’s Prologue on CD-ROM and explains how he used the particular techniques of computerised cladistic analysis as a method of textual criticism. Further information about computerised collation can be found in Hockey (1980) and Robinson (1994).
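Computer-aided collation of this kind rests on sequence alignment: witnesses of the same passage are compared word by word, and the points of agreement and variation are recorded. The sketch below is not Robinson’s Collate program, merely a minimal illustration of the underlying principle using Python’s standard difflib; the two witness readings are invented for the purpose.

```python
import difflib

# Two hypothetical witnesses of the same line (invented readings, for illustration)
witness_a = "Experience though noon auctoritee were in this world".split()
witness_b = "Experience though none authority were in this world is".split()

# Align the two word-sequences and report only the points of variation
matcher = difflib.SequenceMatcher(a=witness_a, b=witness_b)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(tag, witness_a[i1:i2], witness_b[j1:j2])
```

A full collation program must also cope with transpositions, spelling normalisation and many witnesses at once, but the alignment step at its heart is essentially this: the run above reports a substitution (‘noon auctoritee’ against ‘none authority’) and an insertion (‘is’).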
(Cladistic analysis has been developed from systematic biology. Susan Hockey (2000) describes it as ‘software that takes a collection of data and attempts to produce the trees of descent or history for which the fewest changes are required, basing this on comparisons between the descendants’. Cladistics is particularly useful in cases where manuscripts are lost or damaged.)
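In outline, such software scores each candidate tree of descent by the number of changes it would require and keeps the tree needing fewest (the ‘maximum parsimony’ criterion). The sketch below is a minimal illustration only, not the software Robinson used: four invented witnesses, Fitch’s small-parsimony algorithm to count the changes a tree implies, and an exhaustive search over the three possible unrooted trees for four taxa.

```python
# Hypothetical character matrix: each witness (manuscript) reduced to its
# variant readings at five points in the text. All names and readings invented.
witnesses = {
    "A": ["the",  "olde", "man", "sayde", "truly"],
    "B": ["the",  "old",  "man", "sayde", "truly"],
    "C": ["the",  "old",  "man", "said",  "verily"],
    "D": ["that", "old",  "man", "said",  "verily"],
}

def fitch(tree, site):
    """Fitch's algorithm: (possible ancestral readings, changes needed) at one site."""
    if isinstance(tree, str):                      # a leaf, i.e. a witness name
        return {witnesses[tree][site]}, 0
    (ls, lc), (rs, rc) = (fitch(child, site) for child in tree)
    if ls & rs:                                    # children can agree: no change needed
        return ls & rs, lc + rc
    return ls | rs, lc + rc + 1                    # disagreement costs one change

def parsimony(tree):
    """Total number of textual changes the tree of descent requires."""
    n_sites = len(next(iter(witnesses.values())))
    return sum(fitch(tree, s)[1] for s in range(n_sites))

# The three possible unrooted trees for four witnesses, written as rooted
# pairs of pairs (the choice of root does not affect the Fitch score).
trees = [(("A", "B"), ("C", "D")),
         (("A", "C"), ("B", "D")),
         (("A", "D"), ("B", "C"))]
best = min(trees, key=parsimony)
print(best, parsimony(best))
```

On this toy data the preferred tree groups A with B and C with D at a cost of four changes, matching the intuition that those pairs share readings; real cladistic software searches vastly larger tree spaces with heuristics rather than enumeration.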
In addition, however, Greetham (1994) and Robinson (1993; 1997) discuss the way in which, in an electronic edition, hypertext can be used to solve the problem of textual variants. The term ‘hypertext’ was coined in the 1960s by Ted Nelson (Landow 1992) and it refers to a means of linking documents or sections of documents, allowing a reader to navigate his or her own way through a series of paths in a non-linear fashion. Bolter (1991), Landow (1992) and McGann (1997) all write in detail about the technology behind hypertext, its functions and the theories that surround it.
Greetham suggests that decisions that were once the responsibility of the editor can be largely transferred to the reader as hypertext allows all possible variants to be included and linked in an electronic edition. This means that editors do not have to wrestle with the problem of authorial intention or give priority to one text but can incorporate several variants, allowing readers to select the most appropriate text for their particular needs.
This type of editing is, as Greetham argues, distinct from the methods of either establishing a text or accurately reproducing a particular version of a work in a critical edition. The desired result of electronic editing is not, according to McGann (1983) and others, a single conflated text as advocated by the Greg / Bowers school of editing, but one containing multiple variants.
McGann believes that this type of edition frees the reader from the controlling influence of editors, and George Landow (1992) suggests that it facilitates a greater degree of interaction between the reader and the text.
Kathryn Sutherland (1997) however, warns that this type of text places greater demands on a reader than a traditional codex edition. A hypertext edition that contains multiple variants necessarily requires a reader to select material, choose from amongst the possible variants and, therefore, exercise discrimination. She also points out, in an allusion to Barthesian distinctions, that a hypertext edition offering choice amongst variants is, in effect, offering the reader the ‘disassembled texts’ rather than the ‘reassembled work’ (p. 9).
McGann (1996; 1997) suggests that scholarly editions in codex form have limitations because their structure is too close to that of the material that they analyse. He asserts that hypermedia projects such as the Rossetti Hypermedia Archive with which he is involved, offer a different type of focus that does not rely on one central document. He argues that hyperediting allows for greater freedom and has the added advantage of giving readers access to more than just the semantic content of a primary text.
Moreover, McGann believes that hypertext is functioning at its optimum level when it is used to create hypermedia editions that incorporate visual and audio documents. Robinson (1997) however, warns that editors working on major electronic editions are producing material that will not be used to its full potential until there are further developments in the field of textual encoding and software that is more widely available.
P. Aaron Potter (c1997) takes issue with McGann and Landow’s ideas. He argues that a Web page editor controls the material that appears on the screen to an even greater extent than does an editor working on a traditional codex edition. A hypertext document is not a non-sequential document because an editor has inserted links and chosen what he considers the most suitable places for those links to be. A reader can, therefore, only navigate to a part of a document to which an editor has chosen to offer a path.
Hypertext links, asserts Potter, are ‘no more transparent than any reasonable index’ and whilst offering a choice amongst variants, and allowing readers to share some of the editorial functions, electronic editions are far from being either authorless or editorless texts. Moreover, he refers to Foucault’s theories and suggests that, as is often the case, hypertext is an example of a concept that purports to offer greater freedom when in reality it is just more successful at hiding the mechanisms by which it exerts control – in this instance, control of the reader.
Susan Hockey (2000) warns that whilst editors working on electronic editions are freed from many of the limitations of printed books, and the need to rely on one particular text or reading, there is a danger of such projects becoming overly ambitious. She asserts that the inclusion of too much source material can result in editions that have little scholarly value. She maintains that source material should not replace the critical material that makes scholarly editions valuable. Similarly, Sutherland (1997) suggests that a balance needs to be struck between the quantity and the quality of the material that electronic editors choose to include. Claire Lamont (1997) examines the specific problems of annotation and compares how they differ between codex and electronic editions. Hypertext holds out the promise of annotations which are easier to access and which, conceivably, can contain greater quantities of material.
Lamont draws attention to the fact that hypertext editions also have an advantage over traditional editions in that they can be updated whenever necessary, without the need to prepare an entire new edition and without the cost and time that this inevitably involves. However, rather than solving the problems of annotation, such as where, what, and how much to annotate, Lamont concludes that hypertext has simply resulted in ‘another arena in which the debate may continue’ (p. 63). Sutherland (1997) sums up the feelings of many less fervent supporters of electronic textuality by suggesting that the electronic environment is perhaps best thought of as ‘a set of supplementary possibilities’ (p. 7). These possibilities will be debated by editors, theorists and scholars in much the same way as they have debated, and continue to debate, the medium of the book.
Contrary to the optimistic note struck by writers such as McGann (1997), Landow (1992), Lanham (1993) and others concerning an electronic text’s facility to empower the reader, Sven Birkerts (1995) expresses concern at the effect of electronic texts in a book that is pessimistically entitled The Gutenberg Elegies: The Fate of Reading in an Electronic Age. Birkerts suggests that methods of electronic storage and retrieval have a detrimental effect upon a reader’s acquisition of knowledge. Information in an electronic medium, he believes, remains external – something to be stored and manipulated rather than absorbed.
Without claiming to support Birkerts’ theories, Sutherland (1997) suggests that if they do prove to be correct, the implications will be wide-ranging. The scholar who works for years seeking to become an expert in his chosen field, for example, could conceivably be transformed by the computer into little more than a technician – able to locate and manipulate information, but without having any real understanding of it.
Rapid advances in information technology are increasingly becoming the source of debate amongst scholars who seek to determine both the best way of taking advantage of technology and the implications of so doing. Greetham (1997) rightly points out that digitisation is only one small stage in the evolution of texts and Sutherland (1997) remarks that computers, like books, are simply ‘containers of knowledge, information [and] ideas’ (p. 8).
However, as electronic textuality continues to emerge as a force to which the academic community will have to adapt, there will, no doubt, be a continued explosion in the literature that addresses the issues it raises. Jerome McGann is seen by more conservative scholars as too messianic in his endorsement of the electronic medium, and some of his predictions may well prove to have been extreme. Yet in claiming that hyperediting is ‘what scholars will be doing for a long time’ (1997), he is likely, ultimately, to be proved right.
© Kathryn Abram 2002
Birkerts, Sven. 1995. The Gutenberg Elegies: The Fate of Reading in an Electronic Age. New York: Fawcett Columbine.
Bolter, Jay David. 1991. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, N.J.: L. Erlbaum Associates.
Bowers, Fredson. 1964. Bibliography and Textual Criticism: The Lyell Lectures, Oxford, Trinity Term 1959. Oxford: Clarendon Press.
______. 1970. ‘Greg’s “Rationale of Copy-Text” Revisited’. Studies in Bibliography Vol. 31, pp. 90-161.
Chaucer, Geoffrey. 1996. The Wife of Bath’s Prologue on CD-ROM. ed. Peter M. W. Robinson. Cambridge: Cambridge University Press.
Donaldson, Peter. 1997. ‘Digital Archive as Expanded Text: Shakespeare and Electronic Textuality’, in Electronic Text: Investigations in Method and Theory. ed. Kathryn Sutherland, pp. 173-97. Oxford: Clarendon Press.
Foucault, Michel. 1984. ‘What is an author?’, in The Foucault Reader. ed. Paul Rabinow, translated by Josue V. Harari, pp. 101-20. New York: Pantheon Books.
Greetham, D. C. 1994. Textual Scholarship: An Introduction. New York and London: Garland.
______. 1997. ‘Coda: Is It Morphin Time?’, in Electronic Text: Investigations in Method and Theory. ed. Kathryn Sutherland, pp. 199-226. Oxford: Clarendon Press.
Greg, W. W. 1950. ‘The Rationale of Copy-Text’. Studies in Bibliography Vol. 3 (1950-1951), pp. 19-36.
Hockey, Susan M. 1980. A Guide to Computer Applications in the Humanities. London: Duckworth.
______. 2000. Electronic Texts in the Humanities: Principles and Practice. Oxford: Oxford University Press.
Lamont, Claire. 1997. ‘Annotating a Text: Literary Theory and Electronic Hypertext’, in Electronic Text: Investigations in Method and Theory. ed. Kathryn Sutherland, pp. 47-66. Oxford: Clarendon Press.
Landow, George P. 1992. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins University Press.
Lanham, Richard A. 1993. The Electronic Word: Democracy, Technology, and the Arts. Chicago; London: University of Chicago Press.
McGann, Jerome J. 1983. A Critique of Modern Textual Criticism. Chicago: University of Chicago Press.
______. 1996. ‘Radiant Textuality’. Accessed on 19 February 2002.
______. 1997. ‘The Rationale of Hypertext’, in Electronic Text: Investigations in Method and Theory. ed. Kathryn Sutherland, pp. 19-47. Oxford: Clarendon Press.
______. 1997. ‘The Rossetti Hypermedia Archive’ [Web page]. Accessed on 19 March 2002.
McKerrow, Ronald B., and Thomas Nashe. 1904. The Works of Thomas Nashe. Vol. 1. London: A.H. Bullen.
Potter, P. Aaron. c1997. ‘Centripetal Textuality’. Accessed on 19 February 2002.
Robinson, Peter M. W. 1993. The Digitization of Primary Textual Sources. Oxford: Office for Humanities Communication Publications.
______. 1994. ‘Collate: A Program for Interactive Collation of Large Textual Traditions’, in Research in Humanities Computing 3. eds. Susan Hockey and N. Ide, pp. 32-45. Oxford: Oxford University Press.
______. 1997. ‘New Directions in Critical Editing’, in Electronic Text: Investigations in Method and Theory. ed. Kathryn Sutherland, pp. 145-71. Oxford: Clarendon Press.
Sutherland, Kathryn, ed. 1997. Electronic Text: Investigations in Method and Theory. Oxford: Clarendon Press.
Tanselle, G. Thomas. 1975. ‘Greg’s Theory of Copy-Text and the Editing of American Literature’. Studies in Bibliography Vol. 28, pp. 167-231.