Myths, misconceptions and an alternative perspective on information usage and the electronic medium

Andrew Dillon

This item is not the definitive copy. Please use the following citation when referencing this material: Dillon, A. (1996) Myths, misconceptions and an alternative perspective on information usage and the electronic medium. In J. F. Rouet et al. (Eds.) Hypertext and Cognition. Mahwah NJ: LEA, 25-42.

Abstract

Hypertext represents the forefront of a technological wave in education that is driven more by enthusiasm for the computer than by reliable knowledge of the human user. This chapter outlines some of the myths and misconceptions that have emerged in recent years about hypertext and its use for information-intensive activities such as learning. In so doing, it emphasizes experimental evidence over wishful thinking and outlines an ergonomic perspective on human information usage that seeks to maximize usability and ultimately the acceptability of this emerging technology.

Introduction

'Liberation of the reader', 'an information revolution', and 'freedom from constraints' are just three of the catchphrases that are bandied about with immodest regularity when talk amongst many educators, computer scientists and even some psychologists turns to hypertext and its application to education (see e.g., Barrett, 1988; Beeman et al., 1987). In a world of increasing chip speeds and decreasing software costs, the computer provides us with both the image of ourselves as information processors and the potential for our salvation. This is heady stuff indeed.

The beauty of these perspectives is that you can buy into one and dismiss the other and you will, either way, still be eligible for membership of the technocrat club - that exclusive band of late twentieth century scholars who claim to have seen the light (and know it flickers when active). Many cognitive scientists accept the computer metaphor of mind so uncritically that it is inconceivable for them that mental life does not flow through buffers and circuits in an algorithmic embrace of the biological hardware (see e.g., Johnson-Laird, 1988). Yet these people may find little in common with others, often educationalists (e.g., Cunningham et al., 1993), who are dismissive of such artifactual philosophizing. Putatively pragmatic, educationalists prefer applications of technology and want learning environments to be virtual, networked and non-linear, and every child to have a mouse, a menu and a math co-processor.

But what of the rest of us; those people who feel affinity with neither perspective; who see the computer for what it is - a potentially powerful tool when well designed and a dull collection of plastic and electronic circuits when not; no more, no less? What of those who view the impact and uptake of technology as both influenced by and influencing the social structures in which it is embedded? We seem banished to an intellectual no-man's land, somewhere between cognitive science and constructivism, labeled "luddite" at best and "technophobic" at worst. Our only sin is to question the computer metaphor of mind and to seek evidence of the technology's impact on education that meets minimal standards of scientific rationality.

What I will attempt to describe in this chapter is an alternative to the new technocracy, one that avoids technophobia and seeks to demonstrate that it is possible to be enthusiastic about new technologies without losing sight of their role - the service of humans.

Hypertext and the information revolution

It is generally assumed (and rarely demonstrated) by advocates of hypertext that humans are constrained by the (supposedly) inherent linear qualities of paper and forced to access and use information in a strongly directed fashion (see e.g., Collier, 1987; Jonassen, 1989b). In the world of hypertext, this is seen as a bad thing. Hypertext is thus proposed as a superior presentation medium that can support more "natural" interactions with a variety of information sources. This idea can be traced back to Bush (1945) with his emphasis on the role of association in cognition, and although current arguments are less explicitly associationist, the emphasis still falls on the limitations of paper and the potential benefits of electronic documents (see e.g., Nielsen, 1990).

In recent years however, the glib predictions about the death of paper, the emergence of the paperless world and the unmistakable advantages of hypertext have rung hollow. It was Jonassen (1982) who said:

"in a decade or so, the book as we know it will be as obsolete as is movable type today" (p. 379).

and Ted Nelson (1987) who argued:

"the question is not can we do everything on screens, but when will we, how will we and how can we make it great? This is an article of faith - its simple obviousness defies argument."

Yet, the last decade of empirical evidence suggests that this article of faith, like most religious beliefs, does not defy argument. Research has shown that reading from screens is frequently slower (Gould et al., 1987), less accurate (Wilkinson and Robinshaw, 1987) and more fatiguing (Cushman, 1986). Since hypertext has generally failed to show the significant benefits in reading performance and learning that many predicted, few researchers are now willing to make the sweeping statements of the medium's earlier advocates (for a review of empirical findings on reading from paper and screen see Dillon, 1992).

Furthermore, paper shows no signs of dying out as a result of this revolutionary new medium. Girill and Luk (1983) produced evidence a decade ago to the effect that for every line of text read on screen, over one hundred were printed out; three years later this ratio had dropped to 1:17 (Girill et al., 1987). But even as the ratio dropped, the total amount of paper printed out increased. It should be seen as cautionary to those predicting a paperless world that the most successful applications of information technology to date have been the word processor, the fax and the photocopier - technologies that provide everyone with the power to produce limitless amounts of paper at the push of a button!

Interestingly, such poor predictive ability concerning the impact of new technology should not surprise us. Scientists, technologists and business analysts have a weak track record in predicting the social impact of new tools: McKnight et al. (1991) remark that, over time, innumerable artefacts have been predicted to change society, including such unusual ones as the hosepipe and the beret. Humans absorb technological innovation at a pace that suits them, rejecting some innovations and invariably putting others to only partial use. In so doing, they exploit technology in ways often unforeseen by the experts or the designers, rendering accurate predictions of impact difficult.

As a human factors professional or ergonomist, my interest in hypertext reflects a concern with the design of acceptable technology in general and the analysis of human information usage in particular. By 'acceptable' I mean, following Shackel's (1991) definition, technology which satisfies specific criteria for functionality, usability and cost. Theoretical perspectives are undoubtedly enriching and essential for the long-term progress of the field but, ultimately, empirical evidence is required both to bring about changes in contemporary design contexts and to introduce greater rationality into discussions of such technologies (Landauer, 1991). Far too often, the theoretical underpinnings of hypertext design spring from cognitive science or educational psychology with little alteration, as if such theoretical models could jump the theory-practice gap without difficulty. Yet it has become clear over the last decade of research in the field of Human-Computer Interaction (HCI) that transferring social and cognitive science findings to design practice is rarely such a straightforward process (see e.g., Klein and Eason, 1991).

As a result, even in the absence of supporting evidence, the practice of hypertext design has been accompanied by an uncritical acceptance of a host of quasi-psychological notions of reading and cognition. Taken from the laboratory and applied to the everyday context of reading, such theories become diluted and misunderstood. Rather than using the real world as a shaper of theory, contemporary hypertext thinking seems tied to concepts of association and non-linearity of access even as it distorts them. Such issues are important since our theories of information use and human cognition are themselves shapers of future technologies.

Several myths and misconceptions about reading permeate the hypertext design world and need to be corrected if progress is to be made in this field. I will examine these myths in turn, drawing on evidence throughout in support of my points. I will attempt to show that hypertext has largely failed to fulfill much of its early promise and that this should not surprise us since, in reality, the Zeitgeist is currently technocentric, despite expressed concern for users. That is, the very term 'user-centred' is in danger of becoming clichéd: it is used regularly in the absence of any genuine attempt to understand users (see Dillon et al., 1993b, for evidence). Furthermore, I contend, current design thinking is prone to confuse capability in technological terms with acceptability in human terms. On the basis of numerous empirical examinations of readers' verbal protocols and performance with a variety of documents, paper and hypertext, a framework for the evaluation of hypertext applications is proposed which emphasizes usability as the major test for new technologies.

Myth 1: Associative linking of information is natural in that it mimics the workings of the human mind

Virtually all introductory writings on hypertext make reference to the notion of association and non-linearity and there is little to be gained by rehashing the arguments here (see e.g., Nielsen, 1990). What is important however, is the belief that the association of information deemed possible with hypertext is somehow more natural to users (as readers or learners) than paper presentations of the same information, and the effects this belief has on subsequent designs.

Naturalistic associationism as manifest in the hypertext literature holds that knowledge is represented cognitively in some form of semantic network or web. The exact form, however, is rarely stated precisely, and terms such as schemata and networks, scripts and webs are employed by writers on the subject with little or no recourse to contemporary psychological developments (see e.g., Jonassen, 1993). Anderson's (1983) ACT* framework, for example, enables an associative view of cognition but is formally distinguished from schema theory by researchers in the field of cognitive science. This is not to say that ACT* is to be favored over schema theory in hypertext design, but it does mean that anyone drawing on such theories should at least appreciate these differences. Without such clarity, conceptual confusion occurs and progress through theory-building is handicapped as people use terms in a subjective manner and claim theoretical support from inappropriate quarters.

Unfortunately, such distinctions in the cognitive science literature appear to have minimal impact on writers in the hypertext field. This results in a literature replete with theoretically dubious statements of the following kind:

"The practice of 'associative linking' and re-centering may be the best approximation of how a trained mind approaches a problem" (Delany and Gilbert , 1991, p.290)

or

"Schema theory contends that knowledge is stored in information packets or schemas that comprise our mental constructs for ideas. Schemas have attributes, which most often consist of other schemas" (Jonassen 1993, p153-154).

The inevitable result in contemporary writing on this subject is theoretical confusion and the rationalization of design decisions on the basis of superficial or weakly articulated cognitive perspectives. In terms of linking and associating information nodes, it has resulted in a pervasive sense that the user or learner is served simply by putting information into a system and enabling it all to be linked.

Even if the claims for hypertext are perhaps now couched less explicitly in terms of the "natural" relationship between cognition and non-linear information structures, they are still made implicitly by many educators who believe the technology offers the potential for a greater or richer learning experience than paper artifacts (see e.g., Small and Grabowski, 1992, for a contemporary example). Indeed, Jonassen (1989a) explicitly argues that hypertext could be effective for the very reason that it could represent a model of the expert's knowledge structure and the content that should be acquired by the learner.

This is to my mind an unproved (and perhaps even untestable) claim. The representation of the information or knowledge (and these terms are really not synonymous; see below) is only one aspect of providing a learning environment. As has been argued elsewhere (Dillon et al., 1993a), a learner may be able to reproduce the teacher's knowledge representation as manifest in both hypertext and paper forms, but this is no guarantee that meaningful learning has occurred. Certainly, Jonassen's most recent empirical attempts at demonstrating the value of modelling experts' knowledge structures with hypertext (Jonassen, 1993) failed to produce the desired effects, leading him to conclude that hypertext might not be an appropriate mechanism for supporting some learning tasks after all.

The semantic intention of the author is very different from the physical representation of the text or graphic on screen. If one really could convey meaning and organization in the knowledge sense merely by re-organizing the layout of the physical page or screen to highlight associations, then we would have far fewer learning problems to worry about. Unfortunately, it is not that simple, and the best efforts of researchers to date have failed to provide us with well-documented examples of enhanced learning through hypertext (see for example Rouet and Levonen, this volume, for a detailed discussion of the problems here).

Myth 2: Paper is a linear and therefore constraining medium

Hypertext advocates portray paper as a limiting medium, claiming it imposes a "linear straitjacket of ink on paper" (Collier, 1987). This is usually contrasted with the much vaunted liberating characteristics of hypertext, which, with its basic form of nodes and links, is seen to be somehow freer or more natural. Nielsen (1990), for example, reflects the consensus view with the following summary:

"All traditional text ....is sequential, meaning that there is a single linear sequence defining the order in which it is to be read. First you read page one. Then you read page two. Then you read page three.....Whereas hypertext is nonsequential; there is no single order in which the text is to be read" (p.1).

Similar and even more extreme positions can be found in many of the writings describing the claimed wonders of hypertext (Nelson, 1987; Delany and Landow, 1992, inter alia).

Such views do not provide a fair representation of paper (or hypertext for that matter). Certainly, reading a sentence is largely a linear activity (although eye-movement data suggest that even here some non-linearity occurs; see e.g., Just and Carpenter, 1980), and this is as true on screen as on paper: at the sentence and paragraph levels, reading is likely to remain predominantly linear in either medium. The issue, then, is the extent to which linearity is imposed on the reader at the multi-sentence level in using the document.

Though paper texts may be said to have a physically linear format, there is little evidence to suggest that readers are constrained by this or read such texts only in a straightforward start-to-finish manner (Pugh, 1979; Charney, 1987; Horney, 1993). For example, in an examination of academic journal usage, Dillon et al. (1989) identified three reading strategies, only one of which could be described as linear; the other two involved rapid browsing and jumping throughout the text depending on the task goal at the time (checking references, getting a feel for the contents, identifying the statistical analysis, etc.), moves that the established structure of the journal article enabled the reader to make non-linearly. Further evidence to contradict the idea of constrained linear access can be found in a similar analysis of software manual usage (Dillon, 1991a).

With hypertext, movement might involve less physical manipulation (a mouse movement for example compared to turning and folding pages) or less cognitive effort (a selectable link to information compared to looking up an index and then manipulating the text) but these are matters of degree, not of category. We must be clear on this point or face accusations of distortion in our arguments. One could make a case for paper being the liberator as at least the reader always has access to the full text (even if searching it might prove awkward). With hypertext, the absence of links could deny some readers access to information and always force them to follow someone else's ideas of where the information trail should lead.

In many ways, this is little more than common sense; we knew this all along. However, the myth of paper's linearity has taken hold in the field despite evidence from researchers in related disciplines like text linguistics (de Beaugrande, 1980) or textbook design (Whalley, 1993) that authors normally seek to convey their argument in text by utilizing a range of structural elements that are not constrained by linearity of delivery and uptake (e.g., core and adjunct text forms). Furthermore, recent evidence (Dillon, 1991b) indicates that experienced readers of paper documents have expectations of organization within documents that they use to guide manipulation and location in precisely such non-linear manners. In accepting the myth too uncritically, many hypertext developments seem more concerned with superficial presentational details than with formal analysis of information presentation and processing.

An argument could be made for the effect of experience here, i.e., that the typical readers of the text types cited above were frequent, skilled readers of such texts and might have developed such reading strategies despite the much-claimed restriction of the paper medium. I would counter this with two arguments.

First, evidence has been provided that even when faced with a unique text form, typical readers respond by employing such strategies. McKnight et al. (1990) created a paper version of a document that had previously been available only as hypertext and reported that, even with no prior exposure to it, readers demonstrably accessed the material in a non-linear fashion. To be sure, the readers in their experiment seemed to manifest more serial reading of sections, but not one of them read the text in the simple start-to-finish manner we would expect to observe if they had been constrained by the medium.

Second, if hypertext is seen to be more liberating only for readers/learners who are exposed to it at the earliest possible stages of their reading development, then of course one can explain away the evidence McKnight et al. provide. However, in so doing we move from a claim for hypertext as liberating for typical readers (the usual claim) to one where we advocate it as most suitable for the beginning reader. This is a very different claim, one which is rarely articulated in the literature and one that is likely to find little support given its logical implications for most users of current technologies.

Myth 3: Rapid access to a large manipulable mass of information will lead to better use and learning

The belief that enabling access to, and manipulation of, masses of information in an obviously non-linear manner is desirable and will somehow increase learning (however measured) is ever-present in discussions of educational hypertext. Ambron (1988), for example, states that with hypertext:

"users can browse, annotate, link and elaborate information in a rich, non-linear, multimedia database (which) will allow students to explore and integrate vast libraries of text, audio and video information" (italics added)

while Megarry (1988) says that in hypertext:

"the user explores knowledge in a non-linear and interactive fashion" (italics added)

Notice the emphasis on exploration and integration, particularly of "vast" libraries of information or "knowledge", that users will supposedly achieve with this technology. But how will this be achieved? Since when did exploration of information inevitably lead to integration, particularly with large amounts of information? And can we really talk about exploring libraries of "knowledge"? Claims for "learner empowerment" (Small and Grabowski, 1992) are made without critical evidence that it is needed, is possible, or even works. The analysis of hypertext usage makes great play of the medium's potential but rarely demonstrates objectively how this potential is realized. Furthermore, as Hammond (1993) asks, how many learning scenarios really require such free-ranging interactions over large amounts of information?

To date, the claims have far exceeded the evidence, and few hypertext systems have been shown to lead to greater comprehension or significantly better performance levels. Clearly, mere exposure to information is not enough for learning to occur (as we really knew all along) (1). And where the medium fails to produce the required effect, too little emphasis is placed on relating the assumptions behind the design of the information presentation medium to the psychological activities of the learner. This concern with vast information sources over real human needs betrays the technocentric values of its proponents even while they talk in user-centred terms.

Years of educational research have gone into attempts at designing teaching systems, from Skinnerian programmed learning machines to constructivist technology. Furthermore, systems design is not essentially an educational issue but an informational one, and this distinction is all too rarely made in the hypertext community. As I have argued elsewhere (Dillon, 1994), the artifacts we create are instantiations of our theories of the user. If these theories are based on naive assumptions (e.g., 'the human mind works by association' or 'paper forces people to read linearly') then it is only logical that the resulting technologies are unlikely to prove acceptable. While I would endorse iterative design in the search for better artifacts, it is a prerequisite of effective design practice that we articulate clearly the theories of the user that drive our designs and seek to test their assumptions in order to improve such theories.

Myth 4: Future technologies will solve all current problems

Visions of paperless worlds and virtual libraries collapse relatively quickly when one experiences the reality of most HyperCard applications or Guide documents. This myth of a bright future is really little more than the dogma of technological determinism, yet it turns up continually in writings on this new medium (see e.g., Delany and Landow, 1992). The vision of the future is always unbearably bright, and the resulting loss of clarity is a natural consequence of such ill-informed imaginings. New technologies have not merely produced new problems; they have often failed to solve the old ones too.

Hypertext versions of academic literature have been examined with some success (see e.g., McKnight et al., 1991). But the problems of increasing learning with advanced technology remain, and we are no nearer solving the problems of computer-assisted learning with hypertext than we were with previous technologies. Furthermore, as long as we rely on educational theory or technocratic speculation alone, little progress appears possible.

An alternative perspective

Four myths have been examined and four myths have been found wanting. The new technology is neither naturally like us nor certain to lead to educational improvements. So where do we start if we want an alternative take on hypertext and do not wish to fall prey to technophobia?

From an ergonomic or human factors perspective (2), designing hypertext systems is no different from designing any other interactive technology, and therefore consideration in the first instance must be given to the learning task and to users as learners. But what is the task in learning? Clearly, there can be no broad task that constitutes 'learning' in all its manifestations. Learning is a complex, multiply-determined activity or process which cannot simply be equated with information retrieval, target location, navigation or memorization alone.

In human factors terms, learning would be viewed as the user's goal for which technology offers partial support, just as "getting work done" or "performing a job" are viewed as human goals beyond the level of interaction with a computer. These would be considered too high a level of analysis or description for artifact evaluation and of necessity would be divided into what are termed "tasks" (it should be noted that tasks in turn can be sub-divided into "acts" and the complete set of tasks a person performs is called their "job"). Indeed, it is rare to find mention of any such system goal in the ergonomics literature (i.e., "the system is designed to enhance learning") except in broad initial discussions of the design.

In this sense, learning, as a goal (or job, work, etc.), needs to be addressed at a task level where aspects of information location, summarization of ideas, memory, etc. may indeed be identified. Such tasks can be analyzed and subsequently supported technologically. In so designing the technological support (hypertextual or otherwise), empirical methods can be employed to determine usability and to assess the extent to which the technology increases learner performance on that task (though this is usually a different issue from whether or not learning was being supported). Clearly, the analysis of real-world information usage and learning tasks would at least raise questions about the myths of hypertext outlined above.
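
To make this decomposition concrete, the following minimal Python sketch shows a learning goal broken down to the task level at which analysis and evaluation can operate. The goal, task and act labels are invented purely for illustration and are not drawn from any published analysis:

    # A minimal sketch of the goal -> task -> act decomposition described
    # above. All labels are invented for illustration.
    learning_goal = {
        "goal": "understand the causes of the French Revolution",  # too broad to evaluate directly
        "tasks": [  # the level at which design and evaluation can operate
            {
                "task": "locate relevant sections in the document set",
                "acts": ["open the contents list", "select a heading", "scan the section"],
            },
            {
                "task": "summarize the main arguments found",
                "acts": ["re-read key paragraphs", "take notes", "compare two accounts"],
            },
        ],
    }

    # Empirical evaluation targets the tasks, not the goal itself:
    for task in learning_goal["tasks"]:
        print(task["task"], "->", ", ".join(task["acts"]))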

Empirical work on hypertext is in short supply, yet it is our best hope of progress in this field. What is salutary is that attempts at designing demonstrably effective hypertext systems (i.e., those satisfying the empirical test criterion) have needed to rely heavily on user testing to ensure even partial success. The SuperBook project at Bellcore (Landauer et al., 1993) is a case in point. While it is common to see SuperBook cited (justifiably) as a hypertext success story, it is important to realize that early versions of SuperBook actually led to significantly poorer user performance relative to paper on certain tasks.

Although Landauer et al. managed to redesign the hypertext effectively, they admit to being able to do so only on the basis of empirical data that had both highlighted substantial delays at certain task points due to poor system response rate and shown users to be employing sub-optimal search strategies. Even after the first version was modified successfully, room for further improvement remained, as subsequent evaluations indicated other sources of user difficulty.

Similarly, McKnight et al. (1992) designed a system following user-centered methods. On observing equivalent comprehension levels between this medium and a paper equivalent among university students, they claimed their design to be a success. Given the evidence then existing in the field, such a claim had some credence. That experiment involved matched groups of students using either a 40,000-word hypertext or its paper equivalent as a resource for gathering material in order to produce an essay on an issue in a general area in which they had received prior instruction - a typical educational activity. The resulting essays were scored by an expert examiner in the field using a clearly articulated evaluation procedure, again much like a typical university-level examination.

Neither of these studies started from any detailed theoretical assumption about the learner, and neither made claims for learner empowerment; both concentrated first and foremost on task-based empirical methods. Furthermore, neither group would claim on the basis of their results that anything had been demonstrated about learning per se. McKnight et al. (1992), for example, concluded explicitly that they had demonstrated only that, in this situation, hypertext could lead to comprehension levels equivalent to paper. No learning theory was invoked or required to understand the results. More importantly, the notion of using this technology in educational contexts is deemed a problem for analysis at a different level of discourse (one predicated on curriculum design, teaching methods, course requirements, performance assessment, etc.) - the educational level, not the design level.

What such studies do highlight is the importance of task and user variables as well as the need to analyze the process of interaction in order to produce effective performance outcomes on a range of information usage tasks. The SuperBook project followed a classic user-centered design process but it points to several difficulties hypertext interface designers face in educational environments. First, empirical trials are expensive. Even where cheap prototypes can be utilized, locating and training representative users for evaluation purposes and analyzing the subsequent data is not cheap and thus frequently resisted by design teams. Second, the problems identified in the user trials for SuperBook were not complex (e.g., response rate, poorly formulated search criteria, etc.). With hindsight these seem obvious, and there is certainly an established literature on the effect of such variables on human performance with computers, yet none of this highly talented design team predicted them. This is the norm and will occur in all design processes until we have theories of the user or learner that can predict such responses to the interface.

The McKnight et al (1992) work demonstrated the importance of understanding a particular learning task and creating a hypertext environment that matched the learners' needs and expectations of information structure. These were elicited through interviews with potential users. Readers have expectations of structure in a range of paper document types, and removing these in some misguided attempt at liberating the reader is likely to prove detrimental to usage. Furthermore, the design of this hypertext application took on board the relevant design findings on interface variables and user performance (for example on screen size, image polarity and manipulation facilities), demonstrating the applicability of this work to design practice.

By conceptualizing the technology rightly as a supporting element of the learning environment and placing emphasis on ensuring usability from the learner's perspective, ergonomic analysis enables education to proceed along appropriate lines in which the learner, the educator and the technology (or learning environment) form a work or educational system. The design of the technology forms just one important part of the total system, and it is not necessarily one that requires, or is even well served by, an educational theory. However, while this perspective casts doubt on many of the beliefs surrounding hypertext and education that are in vogue, it also suggests that the design of usable pedagogic environments is a theoretically impoverished area. User-centered design philosophies are rarely methods, and the need for data-driven design remains with us for all interactive tools at this time. In the following section a framework is proposed that can help the design team to think about hypertext in a user-centred (and non-mythical) fashion at the earliest stages, hopefully ensuring better quality prototyping and evaluation planning, and thereby lessening the risk of gathering usability data that disappoint or fail to inform re-design.

A Framework of Reader-Document Interaction

Over the course of several long-term projects, readers' and learners' interactions with documents have been studied. The documents studied have been both paper and electronic (including a range of hypertext forms) and of several types (academic, professional, technical, etc.). In the course of this work (see McKnight et al., 1991, 1993, and Dillon, 1994) I have studied readers' behavior and concurrent verbal protocols while they used documents to perform numerous tasks. This work has led to the proposal of the following descriptive framework. The framework (known as TIMS, for Tasks, Information model, Manipulation facilities and Standard reading) is intended to be an approximate representation of the human cognition and behavior central to the reading or information usage process. It consists of four interactive elements that reflect the primary components of reading at different phases. They are:

    1. A Task Model (T) that deals with the reader's needs and uses for the material;

    2. An Information Model (I) that provides a model of the information space;

    3. A set of manipulation skills and facilities (M) that support physical use of the material;

    4. A Standard Reading Processor (S) that represents the cognitive and perceptual processing involved in reading words and sentences.

Fig. 1. The TIMS framework for describing human-text usage

These are interrelated components reflecting the cognitive, perceptual and psychomotor aspects of reading in any given context. According to this framework, document usage or reading is not a matter of merely scanning words on a page, or of acquiring and/or applying a representational model of the text's structure, but a product of both these activities in conjunction with manipulating the document or information space and defining and achieving goals (all within a certain context). So, for example, a reader recognizes an information need, formulates a method of resolving this need, samples the document or information space appropriately by applying her model of its structure and of her task, manipulates it physically as required, and then perceives (in the experimental psychological sense) the words of the text until the necessary information is obtained.

Less serially, shifts may occur at any time between elements, depending on one's purpose and on whether the information sought is being gained (or not): for example, altering one's reading goal in the light of new information, or modifying one's initial information model to take account of new experiences with the information space. The TIMS components are the building blocks of the activity described as reading or information usage and can be combined in numerous permutations. Each of these elements and their various interactions are described in more detail in Dillon (1994), but in the present chapter I will briefly outline how such a framework can aid our thinking on all matters hypertextual.
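
As a rough illustration of this cycle, and nothing more, the following Python sketch traces the four components through the serial account just given. The toy document, the reader's information model and the task are all invented for the example and form no part of the framework itself:

    # A toy trace of the TIMS cycle described above. The reader has a
    # goal (T), applies a model of the information space (I) to decide
    # where to look, manipulates the document to get there (M), and
    # reads the words found there (S), re-sampling if the need is unmet.
    document = {
        "Introduction": "hypertext is an electronic medium ...",
        "Method": "forty readers were observed using two documents ...",
        "Results": "mean reading speed differed between media ...",
    }

    # I: the reader's model maps an information need to likely
    # locations, in order of preference.
    information_model = {"find the findings": ["Introduction", "Results"]}

    def use_document(goal):                          # T: the reader's task
        for location in information_model[goal]:     # I: sample the space
            section = document[location]             # M: move to that section
            if "reading speed" in section:           # S: read the words there
                return location
        return None                                  # need unresolved

    print(use_document("find the findings"))         # -> Results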

Applications of the framework in the user-centered design of hypertext

The framework is intended to support several uses. A designer (and I use the term here to include academics concerned with developing teaching materials as well as traditional software design teams) can use it simply as a checklist to ensure that all important components of the text under design are considered. This guards against relying on research findings at one level alone to ensure good design (e.g., just following the advice of visual ergonomics that certain font, polarity and resolution variables can overcome the reading speed deficit). While such advice might be pertinent and applicable, the framework suggests that it addresses but one part of the problem of designing usable hypertext.

Second, it could be used to guide design by allowing a designer to conceptualize the issues to be dealt with in advance of any specification or prototype. In this sense it enables the hypertext designer to organize his/her thoughts on the problem and highlight attributes of the specification that need to be considered. If this leads to significantly more appropriate first specifications or prototypes, lessening the number of iterations required and thereby reducing the time and costs involved in design, it will serve a particularly useful purpose.

Third, the framework supports the derivation of predictions about readers' performance with a document. Dillon (1994) highlights its value as a predictive tool with which a human factors practitioner, adequately familiar with the research in this area, can anticipate the type of problems a reader will face in using an electronic document. The framework supports the derivation of predicted reading behaviour through the analysis of the various elements and their manifestation or support in the relevant designs.

Finally, the framework has potential evaluative applications. It could be used to guide expert evaluation of a system under development (i.e., a usability assessment) and support troubleshooting for weaknesses in design. This proposed use is not unlike the first use outlined above except it occurs at a different stage in the design process and is intended to support reasoned examination of the quality of an instantiated design. In this role, one could imagine a designer using the framework to check how the system rated on variables such as image quality, the information model it presents, the type of tasks it will support or manipulations it enables.

A concrete example or two will draw out these potential applications more clearly. Consider a design team addressing the development of a new hypertext application to support the teaching of critical reading. Rather than buying into any of the myths that permeate the field, the TIMS framework would suggest analysis of the task first. Thus the intended goal of the system (to support the development of critical reading skills) is reformulated operationally (into, say, comparing and contrasting sections of text by the same or different authors). Once we have a clear idea of the nature of the users' interactions with the system, we can proceed through the other components of TIMS, examining in turn the information structures the users are likely to possess already for this text type, the constraints such structures might place on organizing or linking information, the best form of manipulation facilities for such tasks, and finally the ergonomic principles of screen design that affect reading text. All this can be performed in principle before a single line of code is produced (though of course in reality one would advocate rapid iterative prototyping on some of these issues) and could certainly lead to the identification of issues for which no good answer exists and which should therefore be investigated further before design commitments are made.

At the other end of the design spectrum we can consider an application of the framework at the evaluation stage. Despite the undoubted value of full field trials of new technology, the reality of most design scenarios requires that expert or heuristic evaluations are performed by a human factors specialist (or even by the designers). In this case, TIMS offers a cohesive framework from which to judge the ergonomic practicalities of a hypertext. It indicates that in any context of use, the evaluator needs to consider all four components of the framework in forming an opinion of an artifact's usability. Thus, it would be deemed insufficient merely to consider the screen ergonomics in terms of image quality or conformance to style guidelines, or to judge that, for say, typical students, a mouse is a better manipulation device than function keys, and so forth. Instead, the evaluator is encouraged to consider task specifics, such as how and why the application will be used, and the structure of the information in terms of its support for navigation. In making one's views on all components explicit, a more complete heuristic evaluation is enabled, one which at least touches on all the issues that have been shown to affect user performance with a system.
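
By way of illustration, such a TIMS-guided heuristic evaluation might be organized as a simple checklist, one set of questions per component. The sketch below is illustrative only; the questions are invented examples, not a validated instrument:

    # An illustrative TIMS evaluation checklist. The questions are
    # invented examples, not a published or validated instrument.
    checklist = {
        "T (task)": [
            "How and why will the application actually be used?",
            "Which operational tasks is the design claimed to support?",
        ],
        "I (information model)": [
            "Does the structure match readers' expectations for this text type?",
            "Does the organization support navigation and location?",
        ],
        "M (manipulation)": [
            "Do the input devices and link mechanics suit the tasks identified?",
        ],
        "S (standard reading)": [
            "Do image quality, fonts and polarity support sustained reading?",
        ],
    }

    for component, questions in checklist.items():
        print(component)
        for question in questions:
            print("  -", question)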

Dillon (1994) has employed the framework in making explicit predictions about user performance in specific usage scenarios. For example, given two forms of accessing specific information (paper or hypertext), likely user behaviour was described by tracing activities for the task through the TIMS components and indicating likely areas of advantage or disadvantage for either medium. The empirical results were a good fit to the predictions.

Thus, with such an ergonomic perspective, designers should be in a position to address basic design issues in their attempts to develop usable technological artifacts. No claims are made for hypertext within this perspective beyond those which have been empirically demonstrated, and the discussion of the user in this context makes no reference to association, exploration of vast knowledge stores, linearity or non-linearity. One could invoke these notions, but using TIMS neither depends on nor encourages such speculative discussion.

So where is the learning?

Hypertext does, and will continue to, influence our creation and use of information. In that sense it must have an impact on education and learning activities. However, it is still only a technology, and one that stores, manipulates and presents information which could be presented in other ways. It may be more compact, support faster retrieval, allow greater manipulation and so forth, but it is still an information presentation medium.

To believe that any new technology offers us the means of solving our educational problems is to buy into the new technocracy. This new technocracy is not really very different from the older technocracies of Skinner boxes and CAL terminals (though its theoretical cohesion might certainly be weaker). Yet hypertext can play a part if designed and used appropriately. What I am arguing for here is a re-orientation of our perspectives so that we proceed logically and to the best of our abilities. That means we do not equate learning with the simple process of interaction with a machine but see it as a process involving information access and use. In designing technology for such access and use we need recourse not to educational theory as it now stands but to the principles of ergonomics and user-centered systems design.

Educational theory subsequently plays a part in understanding what it is about the use of such well-designed machines that we can exploit in creating learning contexts where educators and students harness the technology's information handling and presentation qualities. The ergonomic and educational issues are different, located at different levels of analysis, but intertwined when we seek to understand just what it is that hypertext may offer in pedagogical contexts. The onus is on developers of hypertext systems to produce a technology that supports users' learning tasks (clearly articulated in operational terms), emphasizing usability (the effectiveness, efficiency and satisfaction of use, according to emerging international standards for software: ISO 9241, Part 11) in the first instance rather than any increase in learning. Learning may be enhanced by certain technological inputs to the educational context, but only if the technological input is usable, and then not solely as a result of that technology's presence but as a result of judicious use and careful support from the teacher or trainer. The latter issue is most appropriately addressed by educational theorists.
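
To make the usability criterion concrete, the following sketch shows one way such ISO 9241-11 style measures might be operationalized in a task-based trial; the data and the particular measures chosen are assumptions for illustration only:

    # An invented illustration of usability measured as effectiveness,
    # efficiency and satisfaction (after ISO 9241, Part 11). The task
    # data and the measures chosen are assumptions for the example.
    trials = [
        {"completed": True,  "time_s": 140, "satisfaction_1_to_7": 5},
        {"completed": True,  "time_s": 210, "satisfaction_1_to_7": 4},
        {"completed": False, "time_s": 300, "satisfaction_1_to_7": 2},
    ]

    completed = [t for t in trials if t["completed"]]
    effectiveness = len(completed) / len(trials)                 # proportion of tasks completed
    efficiency = sum(t["time_s"] for t in completed) / len(completed)  # mean time per completed task
    satisfaction = sum(t["satisfaction_1_to_7"] for t in trials) / len(trials)  # mean rating

    print(f"effectiveness={effectiveness:.0%}, "
          f"efficiency={efficiency:.0f}s per completed task, "
          f"satisfaction={satisfaction:.1f}/7")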

There is no need for educators or trainers to throw away established learning principles or teaching paradigms because of this. This is not to say the established principles are right (indeed much of educational theory is particularly poor at predicting learning outcomes), but the need for change should be determined not technologically but empirically and, ultimately, theoretically. Our understanding of learning needs to develop on the basis of experimental work and theoretical developments at the human rather than the machine level. Furthermore, we must be clear that informing design with such theories and data requires more than just a simple cross-disciplinary reading of the literature. Bridging representations are required to enable such transfer. The framework described here is an attempt to provide one such bridge and to aid hypertext developers in addressing the fundamental user issues appropriately before attempting any introduction of advanced technology to the learning context.

Footnotes

(1) Witness what I shall term here the 'Maastricht phenomenon'. Recent reports show that more than 9 million words were written on this treaty over the last 12 months in the English press. Yet, when asked 'what is Maastricht?' (it is in fact the name given to a treaty amongst states in the European Community, named after a small Dutch town), most British citizens (who claimed to read newspapers) could not say! However, ever willing, some members of the public proffered their views. Some of the more interesting answers suggested that Maastricht was either a sexually transmitted disease or "something to do with the ozone layer".

(2) The term Ergonomics refers to the multidisciplinary study of humans in working environments, with particular emphasis on the design of technology to support human performance. In the USA, this work is more often termed Human Factors although a shift is occurring towards a more general acceptance of the European term. In some sense, neither term is particularly satisfactory in describing the perspective outlined here which necessarily embraces some perspectives from information science and possibly instructional design.

Acknowledgements

I would like to thank my fellow editors, Dr. Peter Foltz of LRDC, Pittsburgh and Prof. Blaise Cronin of SLIS, Indiana University for comments on earlier drafts that led to significant improvements in the present chapter.

References

Ambron, S. (1988) What is multimedia? In: S. Ambron and K. Hooper (Eds.) Interactive Multimedia. Redmond WA: Microsoft Press.

Anderson, J. (1983) The Architecture of Cognition. Cambridge MA: Harvard University Press.

Barrett, E. (1988) Text, Context and Hypertext. Cambridge: MIT Press.

Beeman, W., Anderson, K., Bader, G., Larkin, J., McClard, A., McQuillan, M. and Shields, M. (1987) Hypertext and pluralism: from lineal to non-lineal thinking. In: Proceedings of Hypertext '87. University of North Carolina, Chapel Hill, 67-88.

de Beaugrande, R. (1980) Text, Discourse and Process. Norwood NJ: Ablex.

Bush, V. (1945) As we may think. Atlantic Monthly, 176(1), 101-108.

Charney, D. (1987) Comprehending non-linear text: the role of discourse cues. In: Proceedings of Hypertext '87. University of North Carolina, Chapel Hill, 109-120.

Collier, G. (1987) Thoth-II: hypertext with explicit semantics. In: Proceedings of Hypertext '87. University of North Carolina, Chapel Hill, 269-289.

Cunningham, D., Duffy, T. and Knuth, R. (1993) The textbook of the future. In: C. McKnight, A. Dillon and J. Richardson (Eds.) Hypertext: A Psychological Perspective. Chichester: Ellis Horwood, 19-50.

Cushman, W. H. (1986) Reading from microfiche, VDT and the printed page: subjective fatigue and performance. Human Factors, 28(1), 63-73.

Delany, P. and Gilbert (1992) In: P. Delany and G. Landow (Eds.) Hypertext and Literary Studies. Cambridge MA: MIT Press.

Delany, P. and Landow, G. (Eds.) (1992) Hypertext and Literary Studies. Cambridge MA: MIT Press.

Dillon, A. (1991a) Requirements analysis for hypertext applications: the why, what and how approach. Applied Ergonomics, 22(4), 458-462.

Dillon, A. (1991b) Readers' models of text structures: the case of academic articles. International Journal of Man-Machine Studies, 35, 913-925.

Dillon, A. (1992) Reading from paper versus screens: a critical review of the empirical literature. Ergonomics, 35(10), 1297-1326.

Dillon, A. (1994) Designing Usable Electronic Text: Ergonomic Aspects of Human Information Usage. London: Taylor and Francis.

Dillon, A., Richardson, J. and McKnight, C. (1993a) Space - the final chapter: or why physical representations are not semantic intentions. In: C. McKnight, A. Dillon and J. Richardson (Eds.) Hypertext: A Psychological Perspective. Chichester: Ellis Horwood, 169-192.

Dillon, A., Richardson, J. and McKnight, C. (1989) The human factors of journal usage and the design of electronic text. Interacting with Computers, 1(2), 183-189.

Dillon, A., Sweeney, M. and Maguire, M. (1993b) A survey of usability evaluation practices and requirements in the European IT industry. In: J. Alty, S. Guest and D. Diaper (Eds.) People and Computers VIII: Proceedings of HCI'93. Cambridge: Cambridge University Press.

Girill, T. and Luk, C. (1983) Document: an interactive online solution of our documentation problems. Communications of the ACM, 26(5), 328-337.

Girill, T., Luk, C. and Norton, S. (1987) Reading patterns in online documentation: how transcript analysis reflects text design, software constraints and user preferences. In: Proceedings of the 34th International Technical Communications Conference. Washington, DC: STC, 111-114.

Gould, J.D., Alfaro, L., Barnes, V., Finn, R., Grischkowsky, N. and Minuto, A. (1987) Reading is slower from CRT displays than from paper: attempts to isolate a single variable explanation. Human Factors, 29(3), 269-299.

Hammond, N. (1993) Learning with hypertext: problems, principles and prospects. In: C. McKnight, A. Dillon and J. Richardson (Eds.) Hypertext: A Psychological Perspective. Chichester: Ellis Horwood, 51-70.

Horney, M. (1993) A measure of hypertext linearity. Journal of Educational Multimedia and Hypermedia, 2(1), 67-82.

Jonassen, D. (1982) The Technology of Text. Vol I. Principles for Structuring, Designing, and Displaying Text. Englewood Cliffs NJ: Educational Technology Publications.

Jonassen, D. (1989a) Mapping the structure of content in instructional systems technology. Educational Technology, 29(4).

Jonassen, D. (1989b) Hypertext/Hypermedia. Englewood Cliffs NJ: Educational Technology Publications.

Jonassen, D. (1993) Effects of semantically structured hypertext knowledge bases on users' knowledge structures. In: C. McKnight, A. Dillon and J. Richardson (Eds.) Hypertext: A Psychological Perspective. Chichester: Ellis Horwood.

Just, M.A. and Carpenter, P. (1980) A theory of reading: from eye movements to comprehension. Psychological Review, 87(4), 329-354.

Johnson-Laird, P. (1988) A computational analysis of consciousness. In: A. Marcel and E. Bisiach (Eds.) Consciousness in Contemporary Science. Oxford: Clarendon Press, 357-368.

Klein, L. and Eason, K. (1991) Putting Social Science to Work. Cambridge: Cambridge University Press.

Landauer, T. (1991) Let's get real: a position paper on the role of cognitive psychology in the design of humanly useful and usable systems. In: J. Carroll (Ed.) Designing Interaction: Psychology at the Human-Computer Interface. Cambridge: Cambridge University Press, 60-73.

Landauer, T. et al. (1993) Enhancing the usability of text through computer delivery and formative evaluation. In: C. McKnight, A. Dillon and J. Richardson (Eds.) Hypertext: A Psychological Perspective. Chichester: Ellis Horwood.

McKnight, C., Dillon, A. and Richardson, J. (1990) A comparison of linear and hypertext formats in information retrieval. In: R. McAleese and C. Green (Eds.) Hypertext: State of the Art. Oxford: Intellect, 10-19.

McKnight, C., Dillon, A. and Richardson, J. (1991) Hypertext in Context. Cambridge: Cambridge University Press.

McKnight, C., Dillon, A. and Richardson, J. (1992) Project CHIRO: Collaborative Hypertext in Research Organisations. British Library Research Report. London: The British Library.

Megarry, J. (1988) Hypertext and compact disks. British Journal of Educational Technology, 19(3), 172-183.

Nielsen, J. (1990) Hypertext and Hypermedia. London: Academic Press.

Nelson, T. (1987) Literary Machines. Abridged Electronic Version 87.1. San Antonio: Ted Nelson.

Pugh, A. (1979) Styles and strategies in adult silent reading. In: P. Kolers, M. Wrolstad and H. Bouma (Eds.) Processing of Visible Language 1. London: Plenum Press.

Shackel, B. (1991) Usability - context, framework, definition, design and evaluation. In: B. Shackel and S. Richardson (Eds.) Human Factors for Informatics Usability. Cambridge: Cambridge University Press.

Small, R. and Grabowski, B. (1992) An exploratory study of information seeking behaviours and learning with hypermedia information systems. Journal of Educational Multimedia and Hypermedia, 1(4), 445-464.

Whalley, P. (1993) An alternative rhetoric for hypertext. In: C. McKnight, A. Dillon and J. Richardson (Eds.) Hypertext: A Psychological Perspective. Chichester: Ellis Horwood.

Wilkinson, R.T. and Robinshaw, H.M. (1987) Proof-reading: VDU and paper text compared for speed, accuracy and fatigue. Behaviour and Information Technology, 6(2), 125-133.