TIMS: a framework for the design of usable electronic text
This item is not the definitive copy. Please use the following citation when referencing this material: Dillon, A. (1996) TIMS: A framework for the design of usable electronic text. In H. van Oostendorp and S. de Mul (eds.) Cognitive Aspects of Electronic Text Processing. Norwood NJ: Ablex, 99-120.
Despite the claims and the promises, the hype and the visions, the reality of electronic text is far less impressive than the rhetoric that surrounds it. The Internet, the World Wide Web, Mosaic, e-journals, word processors, and of course, hypertext are all pushed forward as examples of this triumph of technology, this liberation of the human reader and writer, this future of unlimited information for everyone. Yet, for all this, as has been outlined in detail elsewhere (see e.g., Dillon, 1994), the typical reader of an electronic information source will likely suffer loss of orientation, lower reading speeds, and possibly greater fatigue than the typical reader of a paper document, for few demonstrable benefits.
Why is this and how can cognitive scientists help overcome these basic human engineering problems in technological artifacts? In the present chapter I will seek to answer these questions through the presentation of a framework for analyzing reader-document interaction that has been developed over the last 5 years through my work with colleagues at the HUSAT Research Institute in the UK and SLIS at Indiana University in the USA. This framework, referred to as TIMS (for Task, Information, Manipulation and Standard Reading), is an attempt at conceptualizing human information usage in a form suitable for considering and evaluating design options for electronic documents. In use it can also support interpretation of findings on reading electronic text and help generate testable hypotheses of human performance with a system. It has been developed from, and subsequently tested and refined on real artifact design scenarios.
Reading electronic text: some basic human factors
It is essential when addressing issues of human reading in electronic environments that we ground ourselves in the empirical findings rather than the fantasy of the information revolution. Contrary to the populist image of novice users navigating effortlessly through cyberspace in pursuit of interlinked masses of relevant information, experimental work to date has demonstrated differences between the electronic and paper media in reading at the psychomotor, perceptual and cognitive levels. Put simply, these differences are both process and outcome based and have largely debunked the notion of hypertext or hypermedia as a liberating technology. Dillon (1992) outlined empirical evidence indicating differences in speed, accuracy, comprehension, fatigue, preference, navigation and manipulation between paper and electronic media. Without delving deeply into a literature that has been reviewed in detail in the above-mentioned paper, suffice it to say that, at best, reliable and valid evidence for the advantages of hypertext emerges only in limited task domains and when the technology has been adequately and iteratively designed in a user-centered fashion. In short, most documented initial (theory-based) attempts at improving upon paper have failed, and improvements have only been gained through evaluation and re-design (empiricism) (see e.g., Landauer et al., 1993).
In recognizing this, the intention is not to downplay or reject the potential value of these new tools but to advocate a move beyond the exaggerated and unsubstantiated claims that were the hallmark of the recent (last decade) upsurge of interest in electronic text processing, toward a more enlightened view of reading and general information usage on the part of humans. For sure, the technological determinism remains, often in a cunningly disguised form (see e.g., most of the educational literature on hypermedia), and proponents keep telling us how wonderful the new medium is without reference to the empirical data that contradict them. But technocracy aside, cognitive scientists (a collective term in which I include most social and behavioral scientists working in this domain) are now in a position to shape this technology, armed not only with strong empirical evidence but with an informed sense of what is accomplishable with good design practice.
User-centered artifact design
By good design practice is meant a user-centered approach to artifact creation which can be characterized as iterative, evolving through repeated prototyping and evaluation cycles, with the intention of developing an application that is usable as well as functional. Of necessity, such design practice needs to identify target users and their requirements at the earliest stages, perform detailed task analyses and demonstrate an awareness of all the stakeholders in the new technology. Furthermore, such practice takes as axiomatic the contextual determinism of usability and makes explicit the user, task and environmental variables impacting a technology's exploitation in both the design and testing of the artifact. This issue has been dealt with repeatedly in the literature on HCI (Human-Computer Interaction) and will not be rehashed further here (see e.g., Shackel, 1991; Eason, 1988; McKnight et al., 1991). However, for present purposes it is worth making explicit the meaning of usability, which in this case draws on Shackel (1991) in defining it as the artifact's capability, in human functional terms, to be used easily, effectively and satisfactorily by specific users, performing specific tasks, in specific environments.
The essence of this definition is that it avoids two of the most common problems associated with this concept. First, it explicitly places usability at the level of the interaction between users, performing tasks in certain environments, and the artifact. This takes it beyond the typical features-based definitions common in the field, e.g., where usability is equated with the presence of menus and a mouse or with conformance to certain standards (as embodied in most style guidelines). Clearly, a moment's reflection in the light of the above definition will make it clear why such an index of usability is meaningless.
Second, it operationalises, and thereby provides us with a means of evaluating, an artifact's usability. The HCI field is rife with what may be termed semantic rephrasing as opposed to definitions of usability e.g., usability equals 'user-friendliness', 'ease of use' etc. Such non-definitions skirt the issue and provide little if any guidance to those attempting an evaluation or seeking to develop a prototype.
Applied social/cognitive science practice in this domain is frequently criticized for its putative lack of coherence as a body of knowledge, the poor generalizability of findings, and most commonly, its evaluative rather than predictive nature (see Chapanis, 1988 for a discussion of these issues in general ergonomics work). Most ergonomists or cognitive scientists involved in the design of artifacts could provide numerous examples of being brought into the design and development process only as evaluators of a near-complete design, often only to identify problems that are too costly to correct at such a late stage and which they believe they could have identified and predicted much earlier had they been consulted. An essential tension exists, however, between the nature of our knowledge in the social sciences and our desire to influence design (in large part an engineering discipline) earlier: the absence of applicable theoretical models of human task performance in realistic domains, set against the real power of our evaluative skills, suggests to engineers and software designers that our knowledge is best applied after theirs, not concurrently and certainly never before.
In advocating a contextually determined view of usability, it follows that establishing the intended context of use is an essential first step in usable product design, one that pushes the cognitive scientist to the very earliest stage of design, the conceptualization stage, to perform user, task, and environmental analyses. In recent years, there has been a growing acceptance in principle, if not so much in practice, of designing in a user-centered rather than a product-centered manner and of considering these user issues at the very outset of the design process.
While a user-centered approach (properly executed and coupled with the sophisticated prototyping tools currently emerging) makes the development of usable technology more likely, it remains a non-optimum process which can prove extremely expensive in terms of time and resources. The quality of the original prototype is always dictated by the accuracy or validity of the designer's conceptualization of the intended users and their tasks (the term designer is here used in the singular for convenience, though in practice much design is a team effort and is often performed by people who would not necessarily describe themselves as 'designers'). In the domain of information usage, such conceptualizations are often very weak. The user is seen as "an information processing being with five modalities" (see e.g., Norman, 1986), and sometimes stringent design laws are laid down on totally spurious interpretations of experimental psychological research, e.g., the argument that cognitive psychology 'proves' that menus on a computer should never exceed 7+/-2 items (see e.g., Dyer and Morris, 1991). Yet the decisions taken at this point are likely to have major ramifications for the product. A primary task for cognitive science therefore is the improvement of such articulations of basic science so that better first designs can be developed.
The Nature of Theory and Design
A distinction shall be drawn here between theory in design and theory for design. In the former, it is proposed that the artifacts we create are embodiments of a designer's views of the users and the work they perform. The artifact is thus a conjecture on the designer's part and like all conjectures, is, through the course of its development and its life, open to refutation (either formally in a usability evaluation or practically through real-world rejection of the product). This perspective can prove useful in understanding the nature of the design process and the means by which social scientists may study and influence it (this perspective is discussed in detail in Dillon 1994, see also Carroll and Campbell 1989).
The latter issue, theory for design, is one that concerns many cognitive scientists interested in the design of more humanly acceptable technologies. It is the search for some form of representation of cognitive science knowledge and concepts that can be utilized by designers in their efforts. In one sense the theory in design must subsume the theory for design as part of its own raison d'etre but this is not necessarily the case since the latter would exist even if the former is considered incorrect or inappropriate as a conceptualization. The present chapter is concerned primarily with theory for design but has been influenced in that work by an acceptance of the theory in design perspective.
The disciplines most relevant to providing theories of reading and information usage are arguably cognitive psychology, educational psychology and information science. Indeed, from its outset, psychology viewed reading as an almost perfect experimental task and one whose explanation would represent a triumph for the discipline (Huey, 1908). Educational psychology has a primary interest in the design of artifacts to enhance learning, much of which involves the learner interacting with a document or related information source. Information science is concerned with the storage and retrieval of all information types, though it has often had little to say about the human use of that information once accessed. Despite these shared concerns, we are not in a position to transfer models and theories easily from any of these disciplines to the design community concerned with developing electronic document systems. Even where practitioners in each of these disciplines become designers themselves, their ability to draw on their native fields in creating an artifact is necessarily limited and, at best, weakly articulated.
This should not be taken as a criticism of any discipline. Cognitive psychology, for example, does not exist to predict and explain human performance with interactive technology, and to accuse it of irrelevance to design would be to assume it ever claimed such human activities as part of its remit (though design might be a logical application domain). Landauer (1991) has called for a re-assessment of the value of cognitive psychology in design, but his point seems to be addressed at social scientists and designers who place too much value in the role of this discipline. Landauer's critique is not intended, as some have suggested, to be an attack on the discipline itself, like Kline's (1988) detailed criticism of most contemporary psychology, but a call for a more appropriate use of relevant methods. What such disciplines do offer are perspectives, and certain tools and techniques, that we might draw upon. The transfer of technology from one domain (e.g., laboratory studies of word recognition) to another (e.g., the specification of dialogue terms in a software application) is rather more complex than might be imagined (and certainly more complex than was envisaged in the early work on HCI), but this should not lead us to dismiss the providing discipline. However, we ought to be aware that what we glean from academic disciplines is likely to take a very different shape when utilized in design contexts. In developing such bridging representations it is inappropriate to apply the criteria we use in the academic laboratory, since many of them (sociological or psychological accuracy, statistical significance, unambiguous explication) are less likely to hold sway in the design community, which seems to be more concerned with usefulness, option constraining, and informing designers what to do.
It is the case, however, that models of reading proposed in these contributing disciplines are not directly usable in design, since they are both limited in the range of tasks covered and proposed at a level that seems inappropriate to the design world (see e.g., some of the classic cognitive science work on reading text, such as Just and Carpenter (1980), and consider what design constraints such findings suggest that are beyond common sense). This has left the HCI community to engage in a strongly empirical approach to uncovering aspects of reading electronic text that are important (see e.g., the program of research undertaken on proofreading from VDUs at IBM by Gould et al., 1987). Yet even such an approach remains far from the norm. All too often, theoretical perspectives are drawn upon after the fact to rationalize a design perspective (see e.g., Cunningham et al., 1993 on the constructivist embrace of the Intermedia system) or, worse, weak or flawed empiricism is assumed sufficient.
In developing TIMS, an explicit representation was sought of the issues that emerged repeatedly in design contexts involving a variety of hypertext and other electronic documents. These included academic journals, software manuals, process handbooks, and research project archives. The intention of the framework is to enable those developing electronic text to conceptualize the human cognitive, perceptual and psychomotor factors demonstrably influencing the usability of the created artifact. These factors were derived from several years of experimental investigations of human information usage from an ergonomics perspective. However, the framework is architecture-independent and is neither intended nor articulated as a model of human cognition, but as a design framework to support more adequately the articulation of the pertinent human factors in design.
TIMS is predicated on the following assumptions of human information usage:
(i) Humans read and use information in a goal-directed manner to 'satisfice' the demands of their tasks.
(ii) Humans form models of the structure of, and relationships between, information units.
(iii) Human information usage consists in part of physical manipulation of information sources.
(iv) Human reading at the level of word and sentence perception is bounded in part by the established laws of cognitive psychology.
(v) Human information usage occurs in contexts that enable the user to apply multiple sources of knowledge to the problem in hand.
These predicates will become clearer as the chapter proceeds. In form, TIMS is a qualitative framework and is proposed for use as an advance organizer for design, as a guide for heuristic and expert usability evaluation, and as a means of generating scientific conjectures about the usability of any electronic text. More on these intentions will be presented later, though full details are provided in Dillon (1994). In the following section, a brief overview of the framework is presented.
The TIMS framework
The framework is intended to be an approximate representation of the human cognition and behaviors central to the reading process that are employed in the interaction between reader and document. It consists of four interactive elements that reflect the primary components of the reading situation at different phases in any context. They are: a Task Model (TM), an Information Model (IM), Manipulation Skills and Facilities (MSF), and a Standard Reading Processor (SRP).
These are all interrelated components reflecting the cognitive, perceptual and psychomotor aspects of reading. Reading, or more generally human information usage, is thus conceptualized not as a matter of scanning words on a page alone (as in some narrow psychological models of reading), nor of acquiring and applying a representational model of the text's structure, but as a product of both these activities in conjunction with manipulating the document or information space and with defining and achieving goals and forming strategies for their attainment (all within a certain context). So, for example, a reader recognizes an information need, formulates a method of resolving this need, perceptually and cognitively samples the document or information space by appropriately applying her model of its structure, manipulates it physically as required, and then perceives (in the experimental psychological sense) words of text until the necessary information is obtained. Obviously this is a very simple picture of the reading process; other more complex scenarios are possible, such as the revision of one's reading goal in the light of new information or the modification of one's initial information model to take account of new details. Each of these elements is described in more detail in the following section.
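Although TIMS is a qualitative framework rather than a computational model, the four elements and the simple episode just described can be given a toy formalization. The following Python sketch is purely illustrative: the enum labels follow the framework, but the episode list and all other names are this sketch's own assumptions, not part of TIMS itself.

```python
from enum import Enum

class Element(Enum):
    """The four TIMS components."""
    TM = "Task Model"                       # goals, plans and strategies
    IM = "Information Model"                # model/map of the text's structure
    MSF = "Manipulation Skills/Facilities"  # physical handling of the document
    SRP = "Standard Reading Processor"      # word/sentence-level perception

# The simple episode from the text: recognize a need, plan an approach,
# sample the structure, manipulate the document, then read until satisfied.
simple_episode = [Element.TM, Element.IM, Element.MSF, Element.SRP]

for step in simple_episode:
    print(f"{step.name}: {step.value}")
```

Even such a crude rendering makes one point of the framework visible: the elements are distinct levels of activity, not stages of a single fixed pipeline, and real episodes reorder and revisit them freely.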
The Task Model (TM)
The reading task is the crucial factor in understanding text use and the only reasonable basis on which electronic text design can be investigated. Readers interact with texts purposively: to obtain information, to be entertained, to learn, etc. To do this they must decide what it is they want to get out of the text and determine how they will tackle it (e.g., browse or read start to finish, follow a link or ignore it for now, etc.). Furthermore, during the task they must review their progress and, if necessary, revise the task.
This notion of intentionality in reading gives rise to the idea of planning in the reader's mind. It seems from evidence collected by the author (Dillon 1994) that such planning is relatively gross, taking the form of such intentions as 'go to the index, look for a relevant item and enter the text to locate the answer to my query' or 'to find out what statistical tests were used go to the results section and look for a specific description'. However, plans can be much vaguer than these two examples which probably represent highly specified plans of interaction with the text. Reading an academic article to comprehend the full contents seems to be much less specifiable, the reader is more likely to formulate a plan such as 'read it from the start to the finish, skip any irrelevant or trivial bits, and if it gets too difficult jump on or leave it'. Furthermore, such a plan may be modified as the reading task develops e.g., the reader may decide that she needs to re-read a section several times, or may decide that she can comprehend it only by not reading it all. In this sense planning becomes more situated (see e.g., Suchman, 1988) where the reader's plans are shaped by the context of the on-going action and are not fully specifiable in advance.
The Information Model (IM)
Readers possess (from experience), acquire (while using) and utilize a representation of the document's structure that may be termed a mental model of the text or information space. Such models allow readers to identify likely locations for information within a document, to predict the typical contents of a document, to know the level of detail likely to be found, and to appreciate the similarities between documents. Indeed, several years of experimental work suggest this must be the case (see also van Dijk and Kintsch, 1983).
The present author follows Brewer's (1987) distinction here between 'global' and 'instantiated' schemata with regard to such models. In the present context a global schema consists of a representation of how a typical text type is organized, e.g., a newspaper is made up of a series of articles covering a range of topics grouped into sections on politics, sport, finance etc. These structural representations are general and exist independently of any specific document (though of course they only emerge over time after frequent interactions with many documents). An instantiated schema consists of an embodiment of the generic model based on exposure to a specific text, e.g., noting that the particular article one is reading has a very short introduction or that there is a diagram of statistical results at the bottom of a left-hand page. In other words, when a reader interacts with a text, the original structural model of the text type becomes fleshed out with specific details of the particular text being read.
This distinction will be referred to here more simply as the difference between a model (which is generic) and a map (which is specific). In these terms readers can be said to form mental maps of particular texts as they use them, models help them in this but are not themselves essential for map formation (i.e., it is assumed that a reader can form a detailed map of a document without having been exposed to similar types of text before). In this way, frequent map formation with a document type can be seen as supporting model formation of that document type's generic structure.
In use, the information model helps the reader to organize the text's contents by fitting it into a meaningful structure and thus guards partly against navigational difficulties by providing context i.e., it supports the formation of a mental map of the information space. Thus what is initially a model becomes, with use, a map of a specific text. Where no model exists in advance, it is hypothesized that a map can be formed directly (though it may require more effort on the part of the reader, an empirical issue to be investigated). Similarly, where repeated map formation suggests regularities for a text type, then it is hypothesized that a model may be formed. The point at which a model becomes a map (and the opposite case just outlined) is difficult to quantify and probably not pertinent to present needs but offers a potentially interesting research problem to pursue.
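The model/map distinction lends itself to a small illustrative sketch: a generic structural model is 'fleshed out' with document-specific details to yield a map. All names and example details below are hypothetical, chosen only to mirror the article examples above; none of this comes from Dillon's own materials.

```python
from copy import deepcopy

# A generic model (Brewer's 'global schema'): structure shared by a text
# type, carrying no detail of any particular document.
journal_model = {
    "type": "academic article",
    "sections": ["Abstract", "Introduction", "Method", "Results", "Discussion"],
    "specifics": {},
}

def instantiate(model, **details):
    """Flesh a generic model out into a map of one particular text."""
    text_map = deepcopy(model)             # the model itself must stay generic
    text_map["specifics"].update(details)
    return text_map

# A map (instantiated schema): the model plus details of one specific
# paper, echoing the examples in the text above.
paper_map = instantiate(
    journal_model,
    introduction="very short",
    results_diagram="bottom of a left-hand page",
)

print(paper_map["specifics"])
```

The `deepcopy` reflects the framework's claim that instantiating a map leaves the generic model intact; the reverse direction, repeated maps consolidating into a model, is the empirical question raised above and is not captured by this sketch.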
Manipulation Skills and Facilities (MSF)
Most documents are more than one page or screen in length and thus the reader must be able to physically manipulate the text. With paper, such skills are usually acquired early in life and are largely transferable from one text form to another: if you can manipulate a paperback novel you will have few difficulties with a textbook, and so forth. There are obvious exceptions in the paper domain, however; the ability to manipulate a broadsheet newspaper easily in a confined space, for example, is a skill peculiar to that text form.
On the other hand, the manipulations available with paper are limited in terms of what you can do with the text. Most readers can use their fingers to keep pages of interest available while searching elsewhere in the document or flicking through pages of text at just the right speed to scan for a particular section, but beyond these actions, manipulation of documents becomes difficult. When one then considers manipulation of multiple documents these limitations are exacerbated.
Large electronic texts are awkward to manipulate by means of scrolling or paging alone, but the advent of hypertext, with its associated 'point and click' facilities and graphical user interface qualities, has eased this, at least in terms of speed. Yet the immediacy or intimacy of interaction with electronic text is less than with paper, by virtue of the microprocessor interface between reader and information on screens. Furthermore, the lack of standards in current electronic information systems means that acquiring the skills to manipulate documents on one system will not necessarily be of any use for manipulating texts on another. Obviously, electronic text systems afford sophisticated manipulations such as searching, which can prove particularly useful for certain tasks and render otherwise daunting tasks (such as locating thematically linked quotations from the Bible) manageable in minutes rather than days. Yet electronic search facilities are far from a guarantee of accurate performance.
The various advantages and disadvantages of manipulation facilities on screens have been discussed in the literature (see Dillon, 1994). Ultimately, the goal is to design transparent manipulation facilities that free the reader's processing capacity for task completion. Slow or awkward manipulations are certain to prove disruptive to the reading process. The framework raises these issues as essential parts of the reading process and therefore important ones for designers to consider in the development of electronic text.
Standard Reading Processor (SRP)
The final element of the framework is the standard reading processor. It is the element that perceives the document and performs the activities most typically described as 'reading' in the psychological literature (e.g., Just and Carpenter, 1980). Thus eye movements, fixations, letter/word recognition and the other low-level perceptual, linguistic and cognitive functions involved in extracting meaning from the textual image are properly located at this level.
Originally, I had termed this component "serial" rather than "standard", since I sought to emphasize the experimental evidence that reading generally occurs in a serial fashion and to counter some of the mythical claims of technocrats that hypertext liberated the reader from the constraints of serial reading (see e.g., Nielsen, 1990). However, that label is likely to cause more confusion than the prefix 'standard', which seeks to convey a sense both of typicality in execution and of the level of reading analysis found in standard psychological investigations. At the level of detail addressed here, information extraction from a document relies on the reader processing letters, words and sentences in a form that might be considered explicable in the standard psychological literature.
Decades of psychological investigation have been spent on the question of how humans read, and some of the conclusions drawn from this work have been important in understanding the persistent speed deficits noticed in proofreading from screens (Gould et al., 1987b). For present purposes, the findings on eye movements, reading speeds, letter and word recognition etc. are considered sound; they set a physical ergonomic requirement that screen image quality must at least match that of paper.
So far, the basic components of the framework have been described. The TIMS framework should not be considered the equivalent of a cognitive model of reading such as van Dijk and Kintsch (1983) or Garnham (1987) but a framework intended to reflect the human aspects of performance during the reading process. The elements are those human factors that seem pertinent to electronic text design. A schematic representation of the framework is presented in figure 1. As shown, the elements are all related and collectively framed within the context in which the activity occurs.
Fig. 1 The TIMS framework
In practice, it is not hypothesized that inter-element interactions occur in isolated units. Meaningful engagement with a document is more likely to result in multiple rapid interactions between these various elements. For example, a scenario can be envisaged where, reading an academic article for comprehension, the task model interacts with the information model to identify the best plan for achieving completion. This could involve several TM->IM and IM->TM interactions before deciding perhaps to serially read the text from start to finish. If this plan is accepted then manipulation facilities come into play and standard reading commences. The MSF->SRP and SRP->MSF interactions may occur iteratively (with occasional SRP->IM interactions as distinguishing features are noted) until the last page is reached, at which point attention passes back ultimately to the TM to consider what to do next.
Also, the speed and the iterative nature of the interaction between these elements are likely to be such that it is difficult to demonstrate empirically the direction of the information flow. In many instances it would be virtually impossible to prove that information went from MSF to IM rather than the other way, and so forth. However, this does not preclude examination of these elements and their interactions in an attempt to understand better the process of reading from a human factors perspective. The elements reflect the major components of reading that emerged as important from several years of studies (described collectively in Dillon, 1994) and are intended as a broad representation of what ergonomically occurs during the reading process.
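The comprehension scenario above can be written down as a trace of directed inter-element transitions, which makes the point about multiple rapid interactions concrete. This is a minimal sketch under the assumption that a reading episode can be logged as ordered element pairs; the particular trace is invented for illustration.

```python
from collections import Counter

# The comprehension scenario above as a list of directed transitions
# between elements (labels as in the framework; the trace is invented).
trace = [
    ("TM", "IM"), ("IM", "TM"),       # negotiating a plan for the task
    ("TM", "MSF"),                    # plan accepted: manipulation begins
    ("MSF", "SRP"), ("SRP", "MSF"),   # page/scroll and read, iteratively
    ("SRP", "IM"),                    # a distinguishing feature is noted
    ("MSF", "SRP"), ("SRP", "MSF"),
    ("SRP", "TM"),                    # last page: decide what to do next
]

# Counting transitions shows the texture the framework predicts: many
# rapid interactions rather than one clean pass through the elements.
counts = Counter(trace)
print(counts[("MSF", "SRP")])  # -> 2
```

Note that such a log records only the analyst's inferred direction of flow; as argued above, the underlying direction (MSF to IM versus IM to MSF, say) may be empirically undecidable.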
Verbal Protocols and the TIMS Elements
In order to validate the elements making up the TIMS perspective of information usage, a series of verbal protocol analyses of readers interacting with both electronic and non-electronic documents was performed. The main intention of this test was to see if the elements did indeed emerge in readers' protocols and, more importantly, whether readers articulated thoughts and activities that went beyond the elements described and would thus need to be included in the framework.
The general scenario for the elicitation of these protocols involved readers performing an information search task on a text describing the making of wine. Full details of the study are presented in Dillon (1994) but in the present chapter several extracts from the protocol of one reader, operating on a HyperCard version of the document are presented. The intention here is to demonstrate that the elements proposed in the TIMS framework are manifest in readers' protocols and provide a parsimonious account of reader activity (cognitively and behaviorally) for design purposes.
In the first instance, the reader's opening interactions with the HyperCard stack are described.
Question 1. What 2 subjective impressions are typically used in describing taste?
[Protocol extract omitted: a table of Time, Comment, and Action entries.]
This segment represented one successful task of two and a half minutes duration. What is interesting for present purposes is the intentions and reactions present in the verbal comments. First off, the reader articulates his strategy, the means he has formulated for tackling the task. This is to go to the index and check if terms mentioned in the question are present. Two things are worth noting here. First, the strategy for tackling the task (an obvious TM activity in TIMS) is not a clear cut plan but a first pass at narrowing the task space in the hope that by making some progress, a further stage may suggest itself. As Suchman (1988) eloquently pointed out, the idea that users of interactive technologies formulate complete task plans in advance is erroneous, and real activity is much more situated i.e., governed by context.
The second point to note is that in immediately going to the Index and subsequently to the Contents section, the reader indicates knowledge of the layout of the stack and the information such parts of the document might afford. From the TIMS perspective this is a pure IM activity.
The remainder of the first minute of the protocol demonstrates the user's standard reading of the contents list (SRP activity) and the formulation of an appropriate tactic for making progress (TM activity), which in this case is the selection of a likely section of the document based on its relationship to the key term the reader has identified in the question.
In the second minute much SRP activity is manifest as the reader reads through the selected text searching for the answer. Interestingly, he finds part of the answer and at first thinks it provides the complete information, but soon realizes this is not the case and that further work is required. Obviously, SRP activity quickly transfers to TM, possibly interchangeably (and, for present purposes, unimportantly), before the reader determines that a return to the Contents might be appropriate (a manifestation of IM) and formulates the next stage of progress (TM activity). A further period of SRP activity leads the reader to successful task completion.
In the next section we see the rapid solution to an information task that is rendered possible by the use of the technological facilities. Here, the reader immediately formulates a strategy for handling the task, i.e., given the explicit nature of the question, a search term can be formulated that is likely to provide an immediate answer. This is effective TM activity. Interestingly, what happens in this case throws some light on a minor flaw in the HyperCard interface. The "Search Again?" window controlling the Find command appears on screen in such a place as to obscure the found term in the text. The reader engages in SRP activity for a while before moving the window and locating the highlighted term (MSF activity). A short SRP stage follows and the reader has found the answer.
Question 2. "What term describes wines with an alcohol content of greater than 15%?"
[Protocol transcript with Time, Comment, and Action columns not reproduced in this copy.]
Such tight coupling between user and technology can emerge quickly. Given his success with the search facility, this reader defaults to using it immediately, and by question 5 of his 12-question session he is employing it very effectively:
Question 5. What wine is made with rotten grapes?
[Protocol transcript with Time, Comment, and Action columns not reproduced in this copy.]
Apart from the swiftness of the interaction, the reader also makes a passing reference to his sense of having seen this subject tackled "somewhere" in the document (an IM manifestation). In a sense, then, this is almost a complete linear cycle through the elements (a TM->IM->MSF->SRP sequence leading to the answer).
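Such coding of a protocol into a TIMS element sequence can be sketched computationally. The following is purely an illustration under invented data, not the author's analysis tool: it represents timestamped protocol segments already coded into TIMS elements, collapses consecutive repeats, and reports the transition sequence.

```python
# Hypothetical sketch: extracting the TIMS element sequence from a
# coded verbal protocol. Segment data below is invented for
# illustration, not taken from the actual transcripts.
from itertools import groupby

# Each tuple: (elapsed time in seconds, coded TIMS element)
segments = [
    (0, "TM"),    # formulates a search-term strategy
    (5, "IM"),    # recalls having seen the topic "somewhere"
    (10, "MSF"),  # invokes Find, moves the obscuring window
    (20, "SRP"),  # reads the located passage
    (30, "SRP"),  # confirms the answer
]

# Collapse consecutive repeats to obtain the transition sequence
sequence = [element for element, _ in groupby(e for _, e in segments)]
print("->".join(sequence))  # TM->IM->MSF->SRP
```

A researcher could apply the same collapsing step to any coded session to compare readers' cycles through the four elements.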
For present purposes the protocols will not be examined further. What is being emphasized here is the extent to which the framework provided by TIMS offers a useful summary of the type of activities that readers perform in utilizing an artifact of this nature. Furthermore, given that it does so, what use is intended for such a framework in design?
TIMS as Theory for Design
As a theory of the reader for design purposes, TIMS is not intended to meet stringent cognitive criteria. Indeed, as expressed initially, it is a cognitive architecture-independent representation of human information usage. However, it is posited that several distinct applications of TIMS could prove advantageous to the design world.
In the first instance the framework is useful as a guiding principle or a type of advance organizer of information that gives the designer an orientation towards design, enabling relevant knowledge to be brought to bear on the problem. Thus, when faced with a design problem requiring electronic documents, the designer could conceptualize the intended readers/users in TIMS terms and thereby guide or inform their prototyping activities.
Secondly, by parsing the issues into elements, the framework facilitates identification of the important ones to address. The framework suggests four major issues to consider in any information context: the user's task and their perception of it; the information model they possess, must acquire, or are provided with by the artifact; the manipulation facilities they require or are provided with; and the actual 'eye-on-text' aspects involved. All are ultimately important, though depending on the task and the size of the information space, some may be more important than others (e.g., for very short texts of one screen or less, manipulation facilities and information models might be less important than visual ergonomics, although task contingencies must be considered here before such a conclusion could be reached).
In the third instance the framework provides a means for ensuring that all issues relevant to the design of electronic text are considered. It is not enough that research or analysis is carried out on text navigation (IM issue) and developers ignore image quality or input devices (and vice-versa). A good electronic text system will address all issues (indeed it is almost a definition of a good electronic text that it does so).
The above applications consider the framework's uses at the first stages of system development. In this sense the term designer encompasses any cognitive scientist seeking to influence the specification of an application. However, the framework also has relevance to later stages of the design process, such as evaluation. In such a situation the framework user could assess a system in terms of the four elements and identify potential weaknesses in a design. This would be a typical use for expert evaluation, perhaps the most common evaluation technique in HCI. This is the form of use most commonly made of the framework by the present author.
Outside of the specific life cycle of a product, the framework has potential uses for human factors researchers (or professionals less interested in specific design problems) in that it could be used as a basis for studying reader behavior and performance. The framework is intended to be a synopsis of the relevant issues in the reading process as identified in the earlier studies. Therefore, it should offer ergonomists or psychologists interested in reader-system interaction a means of interpreting the ever-expanding literature in a reader-relevant light. This is a point worth emphasizing, since the transfer of tools and knowledge to users outside of the domain of development has proved a failure in much HCI work.
Certainly an immediate application is to enhance our critical awareness of the claims and findings in the literature on HCI. For example, it is not untypical to hear statements to the effect that 'hypertext is better than paper' or 'reading from paper is faster than reading from screens'. TIMS suggests caution in interpreting such statements. If human information usage involves all elements in the framework, then the only worthwhile statements are those that include these. Thus, the statement 'hypertext is better than paper' is, according to the TIMS perspective, virtually useless, since it fails to mention crucial aspects such as the tasks for which it is better, the nature of the information models required by readers to make it better or worse, the manipulation facilities involved, or the image quality of the screen that affects the standard reading processes of the reader. We know all these variables are crucial since we have over 15 years of research investigating them and trying to untangle their varying effects.
Thus, TIMS suggests that statements about reading, to be of value, need to be complete and make explicit each of the elements in the claim. A more useful statement would be: "For reading lengthy texts for comprehension, for which readers have a well-developed information model, on a scrolling-window, mouse-based system, screens of more than 40 lines are better than those of 20 lines, though both are worse than good quality paper."
Notice that the truth content of the statement is not what is at issue here; it is the completeness of the statement that is important. Incomplete statements (ones not making reference to all elements of the TIMS framework) are too vague to test, since there are unlimited sources of variance that could exist under the general headings "hypertext", "paper", or "is better than". A complete statement might be wrong, but at least it should be immediately testable or refutable given the existing body of empirical data the field has amassed.
The cognitive aspects of electronic text processing are still poorly understood, and the onus is on cognitive scientists interested in real world applications to find a suitable means of bridging the scientific and design communities. For the present author there are two primary motivations for pursuing this task. First, in the absence of sound knowledge of humans and their abilities, preferences, and weaknesses, the information revolution will be dictated by the growing technocracy, those infatuated with machines more than people. Even if one views with a cynical eye the claims that the information revolution is a turning point in civilization akin to the last great industrial revolution, there is no doubt that this technology is shifting society on several levels. As scientists of the human, it is our responsibility to be involved in such events.
Second, and perhaps less emotive, it is enriching to attempt to bring science to bear on real world problems. If nothing more, it offers the sternest test of our work and our approach, and to the present author's mind at least, it is hugely enjoyable.
Brewer, W. (1987) Schemas versus mental models in human memory. In: I.P. Morris (ed.) Modeling Cognition. London: John Wiley and Sons, 187-197.
Carroll, J. and Campbell, R. (1989) Artifacts as psychological theories: the case of HCI. Behaviour and Information Technology, 8(4) 247-256.
Chapanis, A. (1988) Some generalizations about generalizations. Human Factors, 30(3) 253-268.
Cunningham, D., Duffy, T. and Knuth, R. (1993) The Textbook of the Future, In C. McKnight, A. Dillon and J. Richardson (eds.) Hypertext: A Psychological Perspective, London: Ellis Horwood, 19-50.
Dillon, A. (1992) Reading from paper versus screens: a critical review of the empirical literature. Ergonomics (3rd Special Issue on Cognitive Ergonomics), 35(10), 1297-1326.
Dillon, A. (1994) Designing Usable Electronic Text: Ergonomics Aspects of Human Information Usage. London: Taylor and Francis.
Dyer, H. and Morris, A. (1991) Human Aspects of Library Automation. Gower Press.
Eason, K. (1988) Information Technology and Organisational Change. London: Taylor and Francis.
Gould, J.D., Alfaro, L., Barnes, V., Finn, R., Grischkowsky, N. and Minuto, A. (1987a) Reading is slower from CRT displays than from paper: attempts to isolate a single variable explanation. Human Factors, 29(3) 269-299.
Gould, J.D., Alfaro, L., Finn, R., Haupt, B. and Minuto, A. (1987b) Reading from CRT displays can be as fast as reading from paper. Human Factors, 29(5), 497-517.
Huey, E. (1908) The Psychology and Pedagogy of Reading, New York: Macmillan.
Just, M.A. and Carpenter, P. (1980) A theory of reading: from eye movements to comprehension. Psychological Review, 87(4), 329-354.
Kline, P. (1988) Psychology Exposed, or The Emperor's New Clothes. London: Routledge.
Landauer, T., Egan, D., Remde, J., Lesk, M., Lochbaum, C. and Ketchum (1993) Enhancing the usability of text through computer delivery and formative evaluation: The SuperBook Project. In C. McKnight, A. Dillon and J. Richardson (eds.) Hypertext: A Psychological Perspective, London: Ellis Horwood, 71-136.
McKnight, C., Dillon, A. and Richardson, J. (1991) Hypertext in Context. Cambridge: Cambridge University Press.
Norman, D. (1986) Cognitive Engineering. In: D. Norman and S. Draper (eds.) User Centred System Design. Hillsdale NJ: Lawrence Erlbaum Associates, 31-61.
Shackel, B. (1991) Usability - Context, Framework, Definition, Design and Evaluation. In B. Shackel and S. Richardson (eds.) Human Factors for Informatics Usability. Cambridge: Cambridge University Press. 21-37.
Suchman, L. (1988) Plans and Situated Actions. Cambridge: Cambridge University Press.