Never Mind The Theory, Feel The Data: Observations On The Methodological Problems Of User Interface Design

Andrew Dillon and Cliff McKnight

This item is not the definitive copy. Please use the following citation when referencing this material: Dillon, A. and McKnight, C. (1995) Never mind the theory, feel the data: Observations on the design of Hypertext-based User Interfaces, In W. Schuler, J. Hannemann and N. Streitz (eds.) Designing User Interfaces for Hypermedia, Berlin: Springer-Verlag, 117-125.

Keywords. User-centred design; human factors


In the present paper we will seek to place the design of hypermedia-based user interfaces in the appropriate context of user-centred system design. In so doing we will outline what we believe to be the major methodological issues. As this will indicate, we view hypermedia design as essentially no different from any other kind of interface design in terms of process and problem. Hence the methodological issues for hypermedia interfaces need to be seen as design problems rather than cognitive-scientific ones. In this vein, we argue for a data-driven approach to design that seeks theoretical insight at the methodological and process level of design rather than the user level.

User-centred design and the software production process

The traditional phased or waterfall model of software design is not the most appropriate for effective design of devices involving multiple and complex human interactions. Adhering rigidly to this model means that it is often too expensive to make changes late in the development process - it is too hard to push water uphill - and so cost triumphs over common sense, never mind ergonomic principles. This often leads to poor or difficult-to-use designs. Given poorly designed software, discretionary users vote with their feet, and those who have no discretion over its use expend great effort learning to work around it, with reciprocal effects on satisfaction (a major component of current definitions of product usability).

A significant problem for phased design processes is the adequate mapping of user requirements to interface design. Not only can this mapping prove difficult, but requirements themselves are not always simple to elicit accurately and are known to shift with time. Hence, requirements vary in both accuracy and reliability across a typical design process. User-centred design has arisen as an alternative philosophy for design that advocates the use of a variety of methods aimed at the iterative development of usable technology. As the name suggests, user-centred design implies a constant focusing of efforts on usability issues and requires the involvement of users at all stages in the design process rather than simply at the later evaluation stages (as in the more traditional models). This necessarily increases some early costs of the process but, it is argued, should lead to better products which are easier to learn, use and maintain.

However, it should be noted that iron-clad evidence for the cost-benefits of user-centred design is hard to find. Chapanis (1991) presents some success stories and some horror stories but his final conclusion still reads more like a statement of faith. However, analysis of well-known technological disasters or near disasters such as Three Mile Island indicates that poorly designed technology can be a major cause of human error and system failure.

The human factors response to the increased awareness of the importance of ergonomic issues in software has been mixed though it is possible to distinguish several waves or movements over the last 20 years. Initially, the field of HCI expended much effort in producing guidelines for the design of user interfaces that were based on empirical comparisons of various features. Thus, a literature emerged replete with studies examining the value of say, menus over command languages, mouse input devices over function keys, and large screens over small screens. These were periodically drawn together into handbooks of design that could be used to support interface development in industry (see e.g., Smith and Mosier, 1986).

However, it soon became apparent that on their own, such guidelines could be misinterpreted or be used to produce an interface that was extremely poor in usability. Such undesirable outcomes often resulted from an over-literal interpretation of guidelines and a failure to address adequately the contextual factors crucial to understanding usability.

A second wave of research tried to play down the role of empiricism in favour of more theoretical analyses of human-computer interaction. This work drew very heavily on cognitive psychology, and more recently cognitive science, to piece together models of the user or interactive process that could be employed at the earliest stages of design to test for usability without the need to run formal trials.

The most successful of these is the GOMS approach of Card, Moran and Newell (1983) which postulated the existence of a model human information processor whose performance times on routine simple tasks could be accurately predicted given appropriate task analysis.
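The flavour of a GOMS prediction can be conveyed with its simplest member, the Keystroke-Level Model: a routine task is decomposed into a sequence of primitive operators, each with an empirically derived execution time, and the predicted task time is their sum. The sketch below is illustrative only; the operator times are the approximate published values of Card, Moran and Newell (1983), and the example task decomposition is hypothetical.

```python
# Keystroke-Level Model (KLM) sketch: approximate operator times
# (seconds) from Card, Moran and Newell (1983).
OPERATOR_TIMES = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for a unit action
}

def predict_time(operators):
    """Predict execution time for a routine, error-free task,
    given its sequence of KLM operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: think, point at a word, click, home to the
# keyboard, then type a 4-character replacement.
task = ["M", "P", "K", "H"] + ["K"] * 4
print(round(predict_time(task), 2))  # 4.25 seconds
```

Note that such a prediction holds only for the routine, expert behaviour the model was built for, which is precisely the limitation discussed below.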

Despite impressive academic work on such models, industry has been slow to embrace them for practical use. Problems lie in their limited range of application (they tend to be useful at this time only for tasks where users exhibit routine cognitive expertise and little discretion, such as text editing) and the need for some knowledge or expertise in cognition or ergonomics to utilise such formal models reliably. Recent evidence from surveys of the European software industry conducted at HUSAT (Eason et al, 1986; Dillon et al, 1993) suggests that industrial take-up has remained minimal.

Current emphasis has shifted towards the context in which technology is used, explicitly embracing the determining role of user, task and environmental variance in understanding the usability of any technology. Concomitant with this has been an increased awareness of the value of field research in understanding the nature of successful technology. Shackel (1991) for example outlines an approach to usability engineering that eschews formal modelling in favour of the operationalisation of benchmarks that a design must meet. Specifying usability criteria involves not only a sound analysis of the task domain and user characteristics but also a broad knowledge of possible usability metrics.

Operational definitions of usability incorporating measurable aspects of performance such as effectiveness and user satisfaction are possible and allow a practical human engineering perspective to be brought to interface design. The targets to be met are derived either by reference to standards or through negotiation between the design team and intended users or clients, and may even be set simply by reference to competitive systems (e.g., the new design must support at least 10% greater accuracy, 5% less training, 20% greater satisfaction, etc.).
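The logic of such benchmark targets can be made concrete with a small sketch. Everything here is hypothetical: the metric names, baseline figures and margins are invented for illustration, and merely show how operationalised criteria turn usability into a pass/fail engineering check.

```python
# Illustrative benchmark check: the new design must beat the
# baseline by a negotiated relative margin on each metric.
# All names and figures are hypothetical.
TARGETS = {
    "accuracy":      ("higher", 0.10),  # at least 10% greater accuracy
    "training_time": ("lower",  0.05),  # at least 5% less training time
    "satisfaction":  ("higher", 0.20),  # at least 20% greater satisfaction
}

def unmet_criteria(baseline, new, targets=TARGETS):
    """Return the list of metrics on which the new design fails
    to meet its negotiated target; an empty list means success."""
    failures = []
    for metric, (direction, margin) in targets.items():
        old, measured = baseline[metric], new[metric]
        if direction == "higher":
            ok = measured >= old * (1 + margin)
        else:
            ok = measured <= old * (1 - margin)
        if not ok:
            failures.append(metric)
    return failures

baseline = {"accuracy": 0.80, "training_time": 120, "satisfaction": 3.0}
new      = {"accuracy": 0.90, "training_time": 110, "satisfaction": 3.8}
print(unmet_criteria(baseline, new))  # [] - all criteria met
```

The value of such a check lies less in the arithmetic than in forcing the design team to negotiate the metrics and margins explicitly before evaluation begins.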

Giving HCI away?

Running in parallel with these developments in method and approach to user interface design has been a marked tendency for the human factors profession to market its knowledge and skills in a form transferable to the design community. The guidelines approach to design explicitly embraces this view, and the modelling school certainly set out with this in mind, though problems in the transfer made it more difficult. Current contextual work still has not overturned this perspective, but we believe that the appropriate professionals to tackle the usability issues in the design process are human factors specialists themselves. Further, we would argue that human factors professionals should be an integral part of the design team, not simply expected to certify an interface as user-friendly once it has been set in the software equivalent of concrete. Current industrial practice fails to support such views and this is patently not good enough. To ensure that software design follows a user-centred course it is essential that human factors professionals become involved directly in design and do not content themselves with developing tools or methods for non-ergonomists to use or with overseeing evaluations late in the development process. The abstract and sometimes nebulous nature of human factors knowledge renders it difficult to apply without substantial experience, and attempts at transferring this technology through tools rather than people are unlikely to bring much success.

Designing hypermedia-based user interfaces

How, then, is the design of hypertext or hypermedia different from any other interactive software application? We believe there is no inherent difference in principle and user-centred design practices are the most appropriate in this domain. However, the fact that hypermedia-based interfaces are frequently being used in novel applications renders it very difficult to perform formal task analysis, specify the context of usage, or elicit user requirements to any degree of precision. Often, users cannot conceptualise what an advanced hypertext interface will enable them to do and the quality of an interactive application is usually only appreciated with experience. Even where users are capable of conceptualising a novel application reliably, possibly with the use of good prototypes, potential problems such as navigational difficulties or loss of context may not seem likely and therefore are not appreciated until it is too late. In such cases we believe it is even more important to involve human factors specialists in the design process. Added to the novelty of applications for hypertext is the complexity of behaviours supported by modern graphical user interface (GUI) techniques. Again, it requires a thorough understanding of user characteristics and requirements to constrain the range of GUI design options.

Information usage as a psychological process

In designing hypermedia interfaces, few researchers or developers have been able to demonstrate significantly better performance for electronic information over paper documents. This, despite the much lauded arrival of hypermedia as a liberating technology, reflects a failure to understand what readers or users actually do with documents. To a large extent paper has retained its primary position in our lives due to its inherent flexibility and usability. Most people with experience of both media still prefer paper.

In order to design better information carrying media such as hypertext, designers need to understand the process of reading as a task-related information processing activity. However, current models we have of reading are limited to laboratory-derived theories of activities such as word recognition or sentence comprehension rather than ecologically-relevant representations of information usage. Similarly, ergonomists and applied psychologists have examined reading from screens by concentrating on outcome measures such as speed and accuracy at the expense of process issues (see Dillon (1992) for a review). We now know some important features in screen presentation that influence reading speed such as image polarity and resolution (Gould et al, 1987). Unfortunately, given the highly contextual nature of usability, it is unlikely that it will ever prove possible to prescribe interface design purely on a feature or attribute basis. More importantly, the process of information usage is so inherently multi-layered that other issues such as the perception of structure, navigation and location, manipulation and so forth may need to be addressed in particular ways depending on the task, the information type under consideration and the users. In other words, even meeting stringent screen ergonomic criteria will be no guarantee of success in the electronic document domain as has been reported elsewhere (Dillon et al, 1991).

The absence of applicable theory and the problems of user interface design

Currently, no relevant discipline (psychology, information science or computer science) can provide an account of the reading or information usage process adequate to guide design practice. As a result, interactive system design often proceeds on the basis of heuristics, subjective bias or common-sense assumptions about reading and information usage. These may be accurate or inaccurate, but we have no way of knowing for certain until the system is actually evaluated properly.

Interestingly enough, this problem is invariant across most design domains. The literature on architectural practice, mechanical engineering and manufacturing design (see e.g., Darke, 1979; Lawson, 1980) indicates that it is in the nature of design practice that non-algorithmic procedures are followed in all designers' attempts to produce solutions. Typically, designers seek to generate a potential solution and then use this as a means of better understanding the problem. This contrast with classical scientific models of problem-solving has been well demonstrated empirically by Lawson (1980), who examined groups of scientists and architectural students tackling a constrained block-design problem. The scientists tended to proceed by logical progression, taking the problem statement and attempting to derive a step-by-step solution (much as one would expect scientific problem-solving to occur). Design students clearly differed in that they quickly produced potential solutions which they used to check the problem and make better progress.

The net result of this work is that contemporary cognitive theories are no guide to design practice. Yet it is precisely such cognitive theories that underlie most work on Human-Computer Interaction, and therefore empirical methods must be followed. User-centred design methods are essentially predicated on empiricism, as the design of even modestly successful applications such as SuperBook (Landauer et al, 1993) demonstrates. These developers report a series of experiments run over several years as they designed an electronic textbook to support information access. While it is common to see SuperBook cited as a hypertext success story, it is important to realise that early versions of SuperBook actually led to significantly poorer performance relative to paper in certain tasks.

Although Landauer et al. managed to redesign the hypertext effectively they admit to being able to do so only on the basis of empirical data that had both highlighted substantial delays at certain task points due to poor system response rate and shown users to be employing sub-optimal search strategies. Even modifying the first version successfully still left room for further improvement as further evaluations indicated other sources of user difficulty.

The SuperBook project represents the classic user-centred design process but it points to several difficulties hypermedia interface designers face. First, user trials are expensive. Even where cheap prototypes can be utilised, locating and training representative users for evaluation purposes and analysing the subsequent data is not cheap and thus frequently resisted by design teams.

Second, the problems identified in the user trials were not complex (e.g., response rate, poorly formulated search criteria, etc.). With hindsight these seem obvious, yet none of this highly talented design team predicted them. This is the norm and will occur in all design processes until we have theories of the user that can predict such responses to the interface. When will we have them, however? Not soon is the answer, and certainly not in our lifetimes if we continue to expect cognitive psychology or information science to provide the answers. The naturally occurring variance in humans and the contextual determinism of many activities render classic theory building impossible if our goal is the establishment of a general user psychology based on small pockets of laboratory-derived knowledge.

Can such methods be made cost-effective?

Given that human factors activities in the software development process cost money, it is essential to ensure that it is money well spent. This means developing cost-effective methods which provide quality data to inform the design process, reducing the development and testing time, reducing the training time, improving maintainability, or at least some combination of these benefits.

A variety of methods exist but in order for them to be cost-effective we must know when to use each. For example, modern rapid prototyping tools can be used to generate very realistic looking interfaces but there are times when paper and pencil will provide as good data. In some cases, an expert walk-through will be an appropriate way of suggesting design improvements but there comes a point when such methods can contribute no more. A properly designed experimental comparison may provide the best means of deciding between functionally equivalent versions of an interface but at some stages in the design such experiments are impractical. It may be easy to collect performance data such as time taken or errors made, but in many cases verbal protocol analysis will yield more influential data.

In 1987 Shneiderman wrote that high-level theories were beginning to emerge, citing such examples as his own syntactic/semantic model of user knowledge, the GOMS model of Card, Moran and Newell (1983), the four-level models of Foley and Van Dam (1982) and Norman (1984) (later to become a seven-level model) and the production rules model of Kieras and Polson (1985).

The extent to which any of these has impacted the design of real products is not clear but few examples of success are available. Hence, at this point in the development of a theory of design, a reverse engineering approach may well prove instructive. That is, consider some successful or well-designed products and retrospectively study the design process through which they evolved. Some examples, such as the SuperBook project mentioned above, have been well documented and offer a rich source of information and insight but other examples need to be sought out or described. With this approach it should be possible to develop an encompassing methodology which prescribes the appropriate tools and techniques at each point in the design process.

An obvious drawback is the lack of well-documented design processes, and the post hoc rationalisation of design decisions that can occur, but this is a fact of life we must deal with appropriately. What is patently clear from the literature is that we don't need another set of design guidelines; there are too many already, and anyway designers do not use them very effectively. If the fundamental concept of user-centred design is accepted, then it is appropriate for a human factors specialist to form an integral part of the design team throughout the design process. In so doing they can provide the data we need as a discipline to draw up reliable records of design rationales.

What hope is there for theory?

The above suggestions should lead to a more adequately specified user-centred methodology, but will it continue to be a theory-free methodology? If human factors continues to give itself away (and thereby fail to influence the design process in any real sense) then the answer is almost certainly yes. However, with human factors specialists involved in design, learning more about the real processes involved in design, it should be possible to perform meta-analyses of the diverse experiences and thereby develop suitable theories.

Perhaps the search for theory is looking in the wrong place. Do we need a theory of the user at all? It may be that our problems are solvable, methodologically at least, by articulating a theory of design which provides adequate guidance on producing usable artefacts. To an extent the user-centred design approach is one such design theory (although its precise articulation and qualification for that title may be questioned). As yet, however, we are only beginning to understand the cost-benefits of certain evaluation methods (Nielsen 1992) or the best means of ensuring reliable user participation (Eason 1989). These problems are tractable and would seem rich in data given the widespread occurrence of design activity. A shift of emphasis in research to address these questions would appear useful.

There is nothing special about the design of hypermedia systems which warrants their being afforded a special theory or methodology. The principles of user centred design can be applied equally to such systems and any theory which arises out of the design of products should have sufficient generality to apply to hypermedia.


Chapanis, A. (1991) The Business Case for Human Factors in Informatics. In B. Shackel and S. Richardson (eds.) Human Factors for Informatics Usability. Cambridge: Cambridge University Press. 39-71.

Card, S. K., Moran, T. P. and Newell, A., (1983) The Psychology of Human-Computer Interaction. Hillsdale NJ: Lawrence Erlbaum Associates.

Darke, J. (1979) The primary generator and the design process. Design Studies, 1, 36-44.

Dillon, A. (1992) Reading from paper versus screens: a critical review of the empirical literature. Ergonomics, 35(10), 1297-1326.

Dillon, A., Sweeney, M. and Maguire, M. (1993) Usability Engineering in the European IT Industry: current practices. In: J. Alty, S. Guest and D. Diaper (Eds.) People and Computers VIII. Cambridge: Cambridge University Press.

Dillon, A., Richardson J. and McKnight, C. (1991) Institutionalising human factors in the design process: the ADONIS experience. In E. Lovesey (Ed.) Contemporary Ergonomics 1991. London: Taylor and Francis.

Eason, K. (1989) Information Technology and Organisational Change. London: Taylor and Francis.

Eason, K. D., Harker, S. D. P. and Poulson, D. F. (1986) Preliminary investigations into the use of human factors data in the design process. HUSAT Memo No. 377, Loughborough University of Technology.

Foley, J. and van Dam, A. (1982) Fundamentals of Interactive Computer Graphics. Reading, MA: Addison-Wesley.

Gould, J. D., Alfaro, L., Finn, R., Haupt, B. and Minuto, A. (1987) Reading from CRT displays can be as fast as reading from paper. Human Factors, 29(5), 497-517.

Kieras, D. and Polson, P. (1985) An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.

Landauer, T., Egan, D., Remde, J., Lesk, M., Lochbaum, C. and Ketchum, D. (1993) Enhancing the Usability of Text through Computer Delivery and Formative Evaluation: The SuperBook Project. In C. McKnight, A. Dillon and J. Richardson (Eds.) Hypertext: a Psychological Perspective. Chichester: Ellis Horwood.

Lawson, B. R. (1980) How Designers Think. London: Architectural Press.

Nielsen, J. (1992) Finding usability problems through heuristic evaluation. In Proceedings of CHI'92. New York: ACM, 373-380.

Norman, D. (1984) Design rules based on analyses of human error. Communications of the ACM, 26(4), 254-258.

Shackel, B. (1991) Usability: Context, Framework, Definition, Design and Evaluation. In B. Shackel and S. Richardson (eds.) Human Factors for Informatics Usability. Cambridge: Cambridge University Press. 21-37.

Shneiderman, B. (1987) Designing the User Interface: Strategies for Effective Human Computer Interaction. Reading, MA: Addison-Wesley.

Smith, S. L. and Mosier, J. N. (1986) Guidelines For Designing User-Interface Software. Report 7 MTR-10090, Esd-Tr-86-278, Mitre Corporation, Bedford, MA.