Evaluation of a user interface supporting multiple image query models

Javed Mostafa and Andrew Dillon

This item is not the definitive copy. Please use the following citation when referencing this material: Mostafa, J. and Dillon, A. (1996) Design and Evaluation of a User Interface Supporting Multiple Image Query Models. Proceedings of the 59th Annual Conference of the American Society for Information Science, Baltimore, MD, USA, October 21-26, 1996.

I. Introduction

Digital image use occurs in many fields. For example, in the area of medicine, huge volumes of digital images are routinely generated for diagnostic purposes, sometimes reaching the gigabyte range (Gitlin, 1992). Besser (1990) has designed highly innovative image-based selection systems to improve access to visual resources in architecture, anthropology and art collections. There are also signs that museums and archives have accepted the value of digital image technology in their environments (Besser, 1991; Wentz, 1989).

Unfortunately, the technology for effective storage and retrieval of images has not kept pace with the technology of image production. The situation has reached such a critical stage that the National Science Foundation (NSF) organized a special workshop on the topic of visual information management (Jain, 1993). The NSF workshop report stated, "It would be impossible to cope with this explosion of image information, unless the images were organized for retrieval. The fundamental problem is that images, video, and other similar data differ from numeric data and text data format, and hence they require a totally different technique of organization, indexing and query processing."

This paper addresses the critical need for different techniques to improve retrieval of digital images. Our position is that the user interface is the principal component responsible for facilitating retrieval in databases. Therefore, to assure effective access, the design of interfaces needs to be improved.

II. Problem

There are few empirically-oriented studies on image retrieval, and theories that can provide concrete design guidelines are even rarer (Mostafa, 1994a). Chang and Hsu (1992), in a review of the state of the art of image information systems, concluded that commercial databases which support image management treat images as "black boxes", implying that the richness of information present in images was not exploited to provide multiple access paths, thus limiting access to image content. Voicing a similar concern in a survey of imaging technology, Lynch (1988) noted that databases with facilities for image management "cannot interpret the semantics of image..." (although the implication that the semantics of text are tractable by current technology is debatable, at least to the present authors). The interface, via which the query language is implemented, must take into account the difference between visual and verbal information. To truly exploit this difference, the interface must have provision for query formulation based on purely visual means. However, we may expect that any image database would contain both visual and verbal information; therefore, a major problem is to provide multiple access paths to image information. We address this problem, with the aim of developing and evaluating interfaces that permit both verbal and visual means of access to images.

III. Related work

A useful graphical technique applied in interfaces to image databases is a visual matrix of image records (Beard, 1991; Besser, 1990). The visual matrix is produced by displaying small uniformly shaped scaled-down images in multiple rows. According to Tufte (1990), the display of similarly shaped small multiples allows "uninterrupted visual reasoning." Viewers can thus make comparisons "relying on an active eye to select and make contrasts rather than on bygone memories." The visual matrix technique has been successfully applied in the University of California at Berkeley's ImageQuery system (Besser, 1990).

A substantial amount of past research has concentrated on developing visual query languages (Arndt, 1990; Mostafa, 1994a). These query languages can be roughly divided into two groups: (1) those which rely on pattern recognition algorithms and take advantage of geometric or spatial information extracted from images, and (2) those which treat images as whole objects and use verbal descriptions of or pointers to images. Query languages of the former type presuppose that information on the shapes, sizes, and locations of objects present in images is available and that the objects have been indexed in terms of such information.

Query languages that treat images as complete entities are more common. They are usually developed by extending established verbal query languages. For example, Image Structured Query Language (ISQL) is one of the better known extensions of structured query language (SQL), designed to retrieve medical images from relational databases (Assmann, Venema & Hohne, 1986). ISQL and languages similar to it have one significant limitation: queries can only be formulated by using verbal information. The chief difficulty arising from this limitation is that certain information represented in images may be extremely difficult or highly inconvenient to verbalize. For example, imagine a user wishing to find a picture of a woman with eyes that "remind him of his mother."
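To make the limitation concrete, the following is a minimal sketch of the kind of purely verbal query such SQL extensions support. The table and column names are our own illustrative inventions (we do not reproduce actual ISQL syntax); the point is that every search condition must be expressed in words or numbers already stored alongside the image.

    -- Hypothetical verbal-only query: every condition is a verbal attribute.
    SELECT image_id
    FROM   medical_image
    WHERE  modality  = 'CT'
    AND    body_part = 'HEAD';

No query of this form can express a condition such as "looks like this other image"; that requires the visual means discussed next.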

To overcome the difficulty of verbalizing qualitative or subjective information, what is needed is a query mechanism that allows users to point to any image displayed on the screen and perform "Find like this" searches (Chang & Hsu, 1992; Frasson & Erradi, 1979). In short, a language is needed that allows users to utilize images for qualifying searches for other images. This method of searching is similar in spirit to the Query-by-Example (QBE) language. Frasson and Erradi (1979) describe a visual query language for a database interface which is an extension of QBE. In this system, users are presented with a menu of simple iconic representations of human body parts. Users can select one or more of these icons to initiate or constrain searches.

"Find like this" visual querying technique can be better understood by considering its close similarity to the idea of browsing as a method of searching. It is probably most useful when the search goal cannot be precisely articulated beforehand and the search is constrained using data discovered during the search process itself. Gecsei and Martin (1989) describe a browsing technique to search for images whereby users are presented with scaled-down visual surrogates representing the images in the database. When the user selects a particular visual surrogate for closer investigation, the system automatically retrieves other related visual surrogates. As more surrogates are selected, the attributes for establishing relationship among surrogates increases. In this way, the user can progressively narrow the search to one or two highly relevant surrogates. Subsequently, by using a surrogate the user can retrieve the corresponding image from the database.

In a large database with hundreds of images, applying the visual browsing method alone may involve several iterations of search-retrieve cycles before the desired image is found. For known-item searches, where a particular item known to the user through some clues is sought, multiple search iterations can be avoided by allowing keyword or index-term based searches for images. Certain information associated with images, for example, the date of creation, the name of the artist, or the system used for creation, is searched more naturally using verbal terms. Evidence from past research also suggests that a visual query facility alone may not be sufficient to allow users to search effectively for images (Seloff, 1990). In an image database designed for NASA, equipped with both visual and verbal querying facilities, it was found that users preferred the verbal approach when conducting known-item searches (Seloff, 1990). The NASA system also successfully demonstrated how an online index containing both visual and verbal information can be used to support query formulation in an image database.

IV. ViewFinder Interface Design

The work reported here is part of a continuing research effort known as Project ViewFinder. For the ViewFinder project, a database containing visual and verbal information on modern films (approximately 100 records) was developed using the Oracle DBMS environment. The overall architecture of the system is based on a client/server design model: the database functions as the server, and interfaces are created as clients. In Oracle, interfaces can be developed using various high-level development environments (C, C++, Pascal, HyperCard, etc.). For interface development we relied upon Oracle Access, HyperCard and additional external routines written in C.

Data and Index

The index was based on records whose content included both visual and verbal information. Multiple records were created for each film by selecting a set of visual surrogates (relevant segments of films converted to digitized images) and a set of verbal labels associated with the film. Figure 1 shows an example of a ViewFinder record; a hypothetical relational sketch of such a record follows the figure.

    TITLE(S): Batman

    ACTOR(S): Jack Nicholson, Michael Keaton

    DIRECTOR(S): Tim Burton

    MAJOR SUBJECT(S): Comic Adventure

    MINOR SUBJECT(S): Fire & Smoke

    RATING: PG-13

    PLAY TIME: 124 minutes

    RELEASE DATE: 01-JUN-89

    LANGUAGE(S): English

    CLIP NAME: Fire Tunnel

    CLIP ID: batm0008

    VISUAL SURROGATE:

    Figure 1. A film record for the film Batman.
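The record in Figure 1 maps naturally onto a small relational design. The following is a minimal sketch, in Oracle-style SQL, of how such records might be stored; the table and column names are hypothetical, not the ViewFinder project's actual schema. Single-valued fields remain in the main table, while the repeating fields used for linking (title, actor, director, subject) are normalized into a term table.

    -- Hypothetical sketch of a film-clip record (not the actual schema).
    CREATE TABLE film_clip (
        clip_id      CHAR(8) PRIMARY KEY,   -- e.g., 'batm0008'
        clip_name    VARCHAR2(40),          -- e.g., 'Fire Tunnel'
        rating       VARCHAR2(5),           -- e.g., 'PG-13'
        play_time    NUMBER,                -- minutes
        release_date DATE,
        language     VARCHAR2(20),
        surrogate    LONG RAW               -- digitized visual surrogate
    );

    -- One row per indexed term; 'field' is one of the four linking fields.
    CREATE TABLE clip_term (
        clip_id CHAR(8),                    -- references film_clip
        field   VARCHAR2(10),               -- 'TITLE', 'ACTOR', 'DIRECTOR', 'SUBJECT'
        term    VARCHAR2(80)                -- e.g., 'Jack Nicholson'
    );

Under this sketch, the record linking described below amounts to two records sharing a (field, term) pair.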

O'Connor (1985) and Markey (1986) provided excellent guidelines for image indexing, and these guidelines were relied upon to create records and develop the index. During indexing, records were linked to each other whenever common elements were found in the four major fields (title, actor, director and subject). This provided a way to group related images, which was exploited during searching. For additional details on the indexing procedure followed in the ViewFinder project the reader is directed to Mostafa (1994b).

Interface Components & Functions

Visual-Verbal Interface

The major interface components are the online index, the verbal index selector and the visual matrix control panel (Figure 2). As is evident in Figure 2, all these components are represented in demarcated areas. The online index consists of two parts: a visual matrix and a scrollable index-term window. The visual matrix is made up of nine uniformly shaped rectangular windows (Figure 2, key 1). The windows are designed to be 160 pixels wide and 120 pixels high (maintaining a 4-to-3 aspect ratio). To support the sequential browsing function, there are arrow keys and a record display counter placed on the lower right-hand side of the screen (Figure 2, key 5). If a search retrieves more than eight records, the record display counter shows the number of additional records that can be viewed. The interface provides a function that allows the user to browse the retrieved set of surrogates sequentially, forward or backward (arrow keys, Figure 2, key 5).

The design of the Visual-Verbal interface allows users to perform a search using only verbal data, only visual data, or a combination of the two. The interface allows users to execute various types of search operations and to control the output characteristics of film records (visual and verbal information).

A major type of search executable on the Visual-Verbal interface is initiated by using verbal terms. To execute such searches, the user first selects one of four verbal index lists: title, actors, directors, and subject. Upon selection of an index list, the system displays the corresponding list of index terms in a scrollable window. Each search function requires the selection of one term from the index list. In response, the system retrieves and displays the visual surrogates of the records that contain the selected term.
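In relational terms, this two-step interaction can be sketched as two queries against the hypothetical tables introduced after Figure 1 (an illustration, not the system's actual code):

    -- Step 1: populate the scrollable index-term window for one field.
    SELECT DISTINCT term
    FROM   clip_term
    WHERE  field = 'ACTOR'
    ORDER BY term;

    -- Step 2: fetch the surrogates of records containing the chosen term.
    SELECT c.clip_id, c.surrogate
    FROM   film_clip c, clip_term t
    WHERE  t.clip_id = c.clip_id
    AND    t.field   = 'ACTOR'
    AND    t.term    = :selected_term;    -- bound to the user's selection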

Figure 2. Major components of the Visual-Verbal interface.

Another major search function is initiated by using visual surrogates. At any moment there is at least one visual surrogate displayed on the screen (the total number displayed depends on the total number of films covered in the database or the size of the last hit set). The user can employ any visual surrogate to retrieve records that share one or more common elements with it. To do this, the user transfers a displayed visual surrogate to the central part of the screen (an action called "Promote"). This in turn prompts the system to retrieve a ranked set of visual surrogates from the matching records. The ranking is determined by the number of matches between the promoted record and other records, based on four criteria: subject, title, actor, and director. The ranked visual surrogates are displayed in order of decreasing relatedness to the promoted surrogate: in the visual matrix they appear from left to right, in row order, starting at the top left corner and ending at the bottom right corner.
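The ranking behind Promote can be sketched as a self-join on the hypothetical clip_term table: count the indexed terms each record shares with the promoted record, then sort the candidates by that count. This is our illustrative reconstruction, not the system's actual implementation.

    -- Rank other records by the number of (field, term) pairs they share
    -- with the promoted record, bound to :promoted_id.
    SELECT   other.clip_id, COUNT(*) AS matches
    FROM     clip_term promoted, clip_term other
    WHERE    promoted.clip_id = :promoted_id
    AND      other.clip_id   <> :promoted_id
    AND      other.field      = promoted.field
    AND      other.term       = promoted.term
    GROUP BY other.clip_id
    ORDER BY matches DESC;    -- decreasing relatedness, as in the matrix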

The user can also choose to display the verbal terms associated with a visual surrogate by using a function called "Describe." By retrieving all the index terms associated with a visual surrogate, the user may uncover other, more specific or more general, index terms, which can subsequently be used to narrow or broaden a search.
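In the same sketch, Describe is a simple lookup of every verbal term indexed for the selected record:

    -- All index terms associated with the selected visual surrogate.
    SELECT field, term
    FROM   clip_term
    WHERE  clip_id = :selected_id;    -- bound to the selected clip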

Verbal Interface

An alternative interface was also designed for evaluation purposes. The major components of the Verbal interface allow searching for film records using verbal terms only. Additional functions were built in to control the display of search results. In the Verbal interface four major search functions can be executed: Directors, Movies, Actors and Subjects. These search functions are available to users through four different online indices (similar to the one displayed in the Visual-Verbal interface). To execute search functions, users must select terms from the online indices. All the major design constraints established earlier were applied to the design of the Verbal interface. The Oracle client/server environment allowed development of the second interface as another client. Hence, the same database was used by both interfaces, which assured that users of the Verbal interface had access to the same index terms available to users of the Visual-Verbal interface. For evaluation purposes, the display of visual surrogates was suppressed in the Verbal interface.

V. Experiment and analysis

In the previous section, the two interfaces designed for this research were described. The main purpose of developing two interfaces was to gather empirical evidence of interface use in a comparative fashion. Toward that end, an experimental methodology was devised. Interface effectiveness was operationally defined in terms of several dependent variables; here, we report on two of them: search completion and errors. The type of interface used was the independent variable (henceforth referred to as treatment), consisting of two levels: Visual-Verbal and Verbal.

Experimental Groups

Eighteen subjects (7 male, 11 female) were randomly allocated to two conditions (1 = Visual-Verbal interface, 2 = Verbal interface). All subjects were graduate students in Library and Information Science, and none had any prior experience with these interfaces. Subjects in both groups received a small monetary compensation for their participation.

Search Questions

Eight search questions were presented to users. All tasks were solvable using either interface. Based on the type of clues provided, two questions were purely topical searches (only topical clues were provided), while the others had known-item clues (actor, director, and movie title). The questions also differed from each other in the number of steps required to answer them correctly. Using either the Visual-Verbal or the Verbal interface, the first six questions required a minimum of two steps to answer and the last two questions required a minimum of four steps. Pilot tests revealed that all questions could be completed in about 40 minutes.

Procedure

All subjects received a 10-minute tutorial on basic techniques for communicating with the computer and executing the major interface functions. Subjects were also told that there was no specific time limit for completing each task, but that the entire search session (the eight search questions and the two opinion questions) should be concluded in about an hour. Subjects were then presented with their search questions and instructed to begin.

Data Collected

All search sessions were videotaped with a camera focused on the computer screen. Several forms of data were collected:

    (1) Numbers of complete and incomplete searches.

    (2) Data Errors (wrong data used).

    (3) Formulation errors (wrong or redundant function used).

    (4a) Numbers of index terms used in each search task by Visual-Verbal users.

    (4b) Numbers of visual surrogates used in each search task by Visual-Verbal users.

The data error category was used to measure how effectively the interfaces assisted users in selecting search data. The formulation error category concerned the use of an inappropriate search function (e.g., executing a "Director" function when no director name is provided or asked for) or the execution of functions in the wrong order (e.g., executing "Describe" before any visual surrogates are retrieved). The frequencies of index term and visual surrogate use were collected for the Visual-Verbal interface to identify patterns of search-data use. Specifically, the data were collected to identify whether a particular type of clue (visual or verbal) was preferred by users, and whether there was a relationship between the type of clue used and the type of search task.

VI. Findings

Search Completion

The Visual-Verbal group completed 83% of search questions correctly (n=10) and the Verbal group completed 84% of search questions correctly (n=8). The difference is not statistically significant.

Errors

Table X shows the mean number of errors of both types for each of the two groups. The differences are not statistically significant. For both interfaces, more formulation errors were committed than data errors.

Use Patterns of the Visual-Verbal interface

The Visual-Verbal interface users could choose between visual surrogates and verbal terms while conducting their searches. This provided an opportunity to compare patterns of verbal term and visual surrogate use within this group. The compiled data showed that the mean rate of verbal term use was 13.4, while the mean rate of visual surrogate use was 4.2. Data on verbal term and visual surrogate use per search question were also collected (Figure 3). Verbal term use ranged from a low of 11 (Question 3) to a high of 27 (Question 7). In all cases verbal term use was higher than visual surrogate use.

Figure 3. Frequency of verbal term and visual surrogate use per question by the Visual-Verbal group.

Use of visual surrogates varied greatly from question to question. For two questions, Questions 2 and 4, no visual surrogates were used. When frequency of visual surrogate use was separated into specific types based on clues or search steps, we found two interesting patterns: visual surrogates were used more frequently for topical searches (28) than for known-item searches (14), and more frequently in the four-step questions than in the two-step questions.

VII. Conclusions

The Visual-Verbal interface did not lead to greater completion rates, and the patterns of visual surrogate and verbal term use on the Visual-Verbal interface showed that users consistently used more verbal terms than visual surrogates across all questions. Search-performance data related to errors showed that many users had difficulty with the present approach to visual searching. This may have been a factor behind the relatively low visual surrogate usage. The present approach, designed around the Promote function, should be improved. The distribution of visual surrogate use between topical and known-item searches was not equal: more visual surrogates were utilized in the topical questions than in the known-item questions. Past studies on searching have found that topical searching is difficult to perform and usually generates more errors than known-item searching (Moore, 1981; Borgman, 1983). Perhaps the difficulty of conducting the topical searches led the users to attempt alternative means of finding solutions, which involved exploring visual means. More visual surrogates were also used for four-step searches than for two-step searches. This difference, too, seems to support the point that more difficult search tasks may encourage users to explore different search strategies (assuming more steps caused increased search complexity).

A broad factor responsible for low visual surrogate use may be users' lack of skill in solving search problems visually. The cause of this lack of skill may have roots in the way visual information is cognitively processed. The abilities associated with processing visual information can be influenced by both physical and environmental factors. In the next section, we briefly consider some basic attributes of human cognition and how they may have influenced the use data gathered in this research.

Can Use be Explained by Attributes of Human Cognition?

It has been shown, through a series of carefully constructed experiments performed on patients with a severed corpus callosum (the network of nerves that connects the left and right brain hemispheres) and through numerous follow-up experiments performed on normal individuals, that the left and right hemispheres of the human brain possess independent specializations for certain cognitive processes (Springer & Deutsch, 1981). The left hemisphere is usually credited with specialization in verbal, temporal and rational thought processes; the right hemisphere is said to specialize in non-verbal, visuo-spatial, and intuitive thought processes. Paivio, using his dual coding theory, has shown, however, that the cognitive specializations of the two hemispheres are not quite as independent as previously thought (1989). According to Paivio, there is strong evidence of referential interconnections between the two hemispheres, and for certain types of cognitive tasks both hemispheres may make relevant contributions: for example, in comprehending and retrieving "image-rich" words, such as "parade" or "circus." Paivio also convincingly demonstrated that individuals may possess different abilities for recognizing images (1989). Other theories have been put forth that acknowledge hemispheric specialization but assume that differences in imagery abilities among individuals may be strongly influenced by environmental factors. Sless (1981) argued that differences in imagery abilities are the result of long-term shaping of mental abilities through implicit or explicit messages received from parents and teachers. According to Sless, teachers and parents, through educational training, reinforce the proposition that pictures are very simple and self-evident and that no special training in reading them is required. Theorists such as Ornstein have gone even further and implicated Western culture itself (Springer & Deutsch, 1981) in its over-emphasis of the abilities associated with the "left hemisphere" and its neglect of the abilities associated with the "right hemisphere." Hence we see that there are certain inherent human attributes, perhaps the result of a combination of physical and external factors, that cause us to process visual and verbal stimuli differently. We further know that the cognitive abilities associated with processing visual and verbal stimuli may differ from one individual to another. It is, therefore, natural to ask what role the representation of information -- visual or verbal -- plays in influencing cognitive processes. Questions such as these have been studied extensively in educational psychology and less extensively in the field of interface design.

In short, the proposition that representational characteristics of information can be manipulated to achieve high levels of success in retention and recall remains unsettled. Rohr (1986), commenting on behalf of interface designers, provided an optimistic assessment of the role of visual representation in the interface. Rohr gathered evidence from numerous past studies and showed that when interface components are represented visually and spatially, as opposed to verbally, comprehension and recall can be improved. Evidence from educational psychology, however, has been less conclusive. Hunter, Moore and Wildman (1982) investigated the relationship between the type of information representation and retention and recall. They cited many past research results that support the superiority of visual representation over verbal representation. However, when they compared the influence of verbal representation with a combination of visual and verbal representation on retention and recall tasks, they found no significant difference in performance between groups exposed to the two forms of representation. Lindstroem (1980) also analyzed numerous past studies on individual differences in learning from visual and verbal representations and stated that there is no substantive evidence to conclude that the visual form of representation is necessarily superior to the verbal form.

Reflections on Findings of this Research

In reflecting on the major findings of this research, the results seem to agree more with the findings of the educational psychologists than with those of the interface designers. No statistically significant differences were found on the major performance variables between an interface based on both visual and verbal information and an interface based on verbal information only. However, we give much weight to the conclusions of researchers such as Ornstein and Sless (discussed earlier), who stated that our culture and educational system are much more supportive of the development of verbal skills than of visual skills -- so much so that, we contend, it is entirely possible that our pool of subjects was more skilled in performing the search tasks verbally than visually. In this research, individual differences in visual and verbal abilities were controlled, at least theoretically, as we randomly assigned subjects to the two groups. But the methodology could have been made much stronger if we had also collected data on the visual and verbal abilities of the participants (using established instruments). We could then have removed the effect of visual and verbal abilities from the outcomes to make a fairer comparison between the two groups.

Recommendations

As far as the design of specific components goes, we were unable to show, on the basis of a broad set of performance criteria, that a Visual-Verbal interface is superior to a Verbal interface. However, we must point out that the capacity of a Visual-Verbal interface to display film images along with related textual information is a truly powerful and desirable feature. Despite the general lack of visual literacy in our culture, the general trend in education and recreation is toward more intensive use of visual information. The advantages that a Visual-Verbal interface may provide to a student of film or a prospective film viewer may far outweigh the complexity (or cost) associated with its development. In this research, we have demonstrated that such a Visual-Verbal interface can be developed. The use analysis data showed that a high completion rate can be maintained with the Visual-Verbal interface.

Certain weaknesses of the Visual-Verbal interface, however, were revealed. It is our position that, with certain modifications of the design constraints applied in this research, broad-based effectiveness covering the major performance criteria can be achieved. Below, some design enhancements of the Visual-Verbal interface are suggested.

1. Search paths: Users need guidance in performing common search tasks. We suggest that such information be made available to users through graphical navigational aids that make the search paths visible and explicit in the interface.

2. Closer linking of visual and verbal information: Users had difficulty grasping the relationship between visual surrogates and verbal terms. A suggestion provided by a user can aid in better presentation of visual and verbal information: pop-up fields can be placed near visual surrogates which, when activated, would show all the related verbal terms. We suggest a minor modification to this idea. Instead of the pop-up field being empty in the inactive state, it could be used to display the title of the film the visual surrogate represents. This way, the user could quickly scan the visual matrix and tell which films have been retrieved.

3. Better support for topical searching: In the Visual-Verbal system, the number of topical index terms assigned to each movie should be increased without increasing the current number of attributes. The topical terms should be grouped and organized under the appropriate attribute headings in the index-term window.

4. Provision for narrowing large hit sets: This can be achieved with a relatively easy modification to the current Visual-Verbal interface, in which an internal counter already keeps track of the size of the hit set. If the hit set is large, the user should be prompted for permission before the set is displayed. If the user chooses not to display a large hit set but to narrow it further, two options should be provided. First, the system should prompt the user to select one or more visual surrogates from the screen; the system can then narrow the hit set by retaining only those surrogates that are related to the surrogate(s) the user selected (relationship based on a common actor, director, title or subject). Second, the user should be prompted to select additional verbal terms, which can then be used to narrow the hit set. In both cases the narrowing takes place through an implicit Boolean "AND" operation, as sketched below. Implicit means of performing Boolean operations should be given preference over explicit means to minimize the complexity of the interface.
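Under the hypothetical schema sketched earlier, both narrowing options reduce to intersecting the current hit set with one additional condition, which is precisely an implicit Boolean "AND." For example, narrowing by a newly selected subject term might look like the following (hit_set standing for an internal table or cursor holding the ids of the current results):

    -- Keep only those current hits that also carry the new subject term.
    SELECT h.clip_id
    FROM   hit_set h, clip_term t
    WHERE  t.clip_id = h.clip_id
    AND    t.field   = 'SUBJECT'
    AND    t.term    = :new_term;     -- bound to the user's added term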

Acknowledgment

Access to facilities in the ICON Laboratory at the University of Texas at Austin was crucial in initiating Project ViewFinder. Presently, the Usability Laboratory at Indiana University serves as the development and evaluation environment for Project ViewFinder. We are grateful to the principal founders and funders of both laboratories.

Bibliography

Arndt, T. 1990. A survey of recent research in image database management. In S-K. Chang (Ed.), Proceedings of the 1990 IEEE Workshop on Visual Languages (pp. 92-97). Los Alamitos, CA: IEEE Computer Society Press.

Assmann, K., Venema, R. & Hohne, K.H. 1986. The ISQL language: A software tool for the development of pictorial information systems in medicine. In S-K. Chang, T. Ichikawa & P. Ligomenides (Eds.), Visual Languages (pp. 261-283). New York, NY: Plenum Press.

Beard, D.V. 1991. Computer human interaction for image information systems. Journal of the American Society for Information Science, 42(8), 600-608.

Besser, H. 1990. Visual access to visual images: The UC Berkeley image database project. Library Trends, 38(4), 787-798.

Besser, H. 1991. User interfaces for museums. Visual Resources, VII, 293-309.

Bordogna, G., Carrara, P., Gagliardi, I., Merelli, D., Naldi, F. & Padula, M. 1990. A system architecture for multimedia information retrieval. Journal of Information Science, 16, 229-238.

Borgman, C. 1983. End user behavior on the Ohio State University's libraries' online catalog: A computer monitoring study. (OCLC Report No. OCLC/OPR/RR-83/7). Dublin, OH: OCLC Online Computer Library Center, Inc.

Brodie, K.W., Carpenter, L.A., Earnshaw, R.A., Gallop, J.R., Hubbold, R.J., Mumford, A.M., Osland, C.D., & Quarendon, P. (Eds.). 1992. Scientific visualization: Techniques and applications. Berlin: Springer-Verlag.

Brookes, D. 1988. System-system interaction in computerized indexing of visual materials: A selected review. Information Technology and Libraries, 7(2), 111-123.

Cavallaro, U. & Paolini, P. 1993. HIFI: Hypertext interface for information: Multimedia and relational databases. The Electronic Library, 11(2), 65-71.

Chang, S-J. & Rice, R.E. 1993. Browsing: A multidimensional framework. In M.E. Williams (Ed.), Annual Review of Information Science and Technology, vol. 29. Medford, NJ: Learned Information.

Chang, S-K., Yan, C.W., Dimitroff, D. & Arndt, T. 1988. An intelligent image database system. IEEE Transactions on Software Engineering, 14(5), 681-688.

Chang, S-K. & Hsu, A. 1992. Image information systems: Where do we go from here? IEEE Transactions on Knowledge and Data Engineering, 4(5), 431-442.

Costagliola, G., Tortora, G. & Arndt, T. 1992. A unifying approach to iconic indexing for 2-D and 3-D scenes. IEEE Transactions on Knowledge and Data Engineering, 4(3), 205-222.

Frasson, C. & Erradi, M. 1979. Graphics interaction in databases. In A. Blasser (Ed.), Database Techniques for Pictorial Applications (pp. 291-301). Berlin: Springer-Verlag.

Fox, E. 1989. The coming revolution in interactive digital video. Communications of the ACM, 32(7), 794-801.

Gecsei, J. & Martin, D. 1989. Browsing access to visual information. Optical Information Systems, 9(5), 237-241.

Gitlin, J.N. 1992. Application of the management of digital images to the medical field. In M.R.D. D'Alleyrand (Ed.), Handbook of Image Storage and Retrieval Systems. New York, NY: Van Nostrand Reinhold.

Green, W. 1989. Digital image processing. New York, NY: Van Nostrand Reinhold.

Hibler, D.J.N., Leung, C.H.C., Mannock, K.L. & Mwara, N.K. 1992. A system for content-based storage and retrieval in an image database. In A. Jamberdino & N. Wayne (Eds.), Image Storage and Retrieval Systems: Proceedings of the International Society for Optical Engineering. Bellingham, WA: SPIE.

Huang, K-T. 1990. Visual interface design systems. In S-K. Chang, Principles of Visual Programming Systems (pp. 60-143). Englewood Cliffs, NJ: Prentice Hall.

Jain, R. 1993. NSF workshop on visual information management systems. In W. Niblack (Ed.), Storage and Retrieval for Image and Video Databases: Proceedings of the International Society for Optical Engineering (San Jose, CA, February 2-3, 1993). Bellingham, WA: SPIE.

Janosky, B., Smith, P., & Hildreth, C. 1986. Online library catalog systems: An analysis of user errors. International Journal of Man-Machine Studies, 25, 573-592.

Jern, M. 1989. Visualization of scientific data. In: Purgathofer, W. & Schonhut, J. eds. Advances in Computer Graphics V; London: Springer-Verlag.

Kerlow, I.V. & Rosebush, J. 1986. Computer Graphics for Designers and Artists. New York, NY: Van Nostrand Reinhold.

Lane, T. 1990. User interface software architecture (CMU-CS-90-101). Pittsburgh, PA: Carnegie Mellon University.

Lindstroem, B. 1980. Forms of representation, content and learning. Gothenburg, Sweden: Acta Universitatis Gothoburgensis.

Loomis, M. 1983. Data management and file processing. Englewood Cliffs, NJ: Prentice-Hall.

Lynch, C. 1988. The technologies of digital imaging. Journal of the American Society for Information Science, 42(8), 578-585.

Mackay, W.E., Guindon, R., Mantei, M., Suchman, L. & Tatar, D.G. 1988. Video data for studying human-computer interaction. In Proceedings of Human Factors in Computing Systems. New York, NY: Association for Computing Machinery.

Markey, K. 1986. Subject access to visual resources collections: A model for computer construction of thematic catalogs. New York, NY: Greenwood Press.

Moore, C. 1981. User reaction to online catalogs: An exploratory study. College and Research Libraries, 42, 295-302.

Mostafa, J. 1994a. Digital image representation and access. In M.E. Williams (Ed.), Annual Review of Information Science and Technology, vol. 29. Medford, NJ: Learned Information.

Mostafa, J. 1994b. Design and analysis of a HCI for an image database. Ph.D. dissertation completed at The University of Texas at Austin.

Myers, B.A. 1992. State of the art in user interface software tools (CMU-CS-92-114). Pittsburgh, PA: Carnegie Mellon University.

Myers, B.A. 1994. Challenges of HCI design and Implementation. Interactions, 1(1), 73-83.

Niblack, W., Barber, R., Equitz, W., Flickner, M., Glasman, E., Petkovic, D., Yanker, P., Faloutsos, C. & Taubin, G. 1993. The QBIC project: Querying images by content using color, texture, and shape. In W. Niblack (Ed.), Storage and Retrieval for Image and Video Databases: Proceedings of the International Society for Optical Engineering. Bellingham, WA: SPIE.

Norman, D.A. 1988. The psychology of everyday things. New York, NY: Basic Books.

O'Connor, B. 1985. Access to moving image documents: Background concepts and proposals for surrogates for film and video works. The Journal of Documentation, 41(4), 209-220.

Orbach, B. 1990. So that others may see: Tools for cataloging still images. Cataloging and Classification Quarterly, 11(3/4), 163-191.

Ornstein, R. 1977. The Psychology of Consciousness. New York, NY: Harcourt.

Orr, J.N. 1990. Sorting through the GUIs. Computer Graphics World, June 1990.

Renaud, P. 1993. Introduction to Client/Server Systems: A Practical Guide for Systems Professionals. New York, NY: Wiley & Sons.

Rohr, G. 1986. Using visual concepts. In S-K. Chang, T. Ichikawa & P. Ligomenides (Eds.), Visual Languages (pp. 325-348). New York, NY: Plenum Press.

Rorvig, M. E. 1986. The substitutability of images for textual descriptions of archival materials in an MS-DOS environment. In H. Strohl-Goebl & K. Lehman (Eds.), Proceedings of the Second International Conference on the Application of Microcomputers in Information Documentation and Libraries. Amsterdam: North Holland.

Rorvig, M.E. 1993. A method for automatically abstracting visual documents. Journal of the American Society for Information Science, 44(1), 40-54.

Salton, G. 1989. Automatic text processing. Reading, MA: Addison-Wesley.

Screen Achievement Records Bulletin. 1976. Beverly Hills, CA: Academy of Motion Picture Arts and Sciences.

Seloff, G. A. 1990. Automated access to NASA-JSC image archives. Library Trends, 38(4), 682-696.

Shatford, S. 1986. Analyzing the subject of a picture: A theoretical approach. Cataloging and Classification Quarterly, 6(3), 39-62.

Shneiderman, B. 1992. Designing the user interface: Strategies for effective human-computer interaction. Reading, MA: Addison-Wesley.

Sless, D. 1981. Learning and visual communication. London: Croom Helm.

Springer, S. & Deutsch, G. 1981. Left brain, right brain. San Francisco: W.H. Freeman and Company.

Trenish, L.A. 1989. An interactive discipline-independent data visualization system. Computers in Physics, 3(4), 55-64.

Tufte, E.R. 1990. Envisioning information. Cheshire, CT: Graphics Press.

Turner, J. 1990. Representing and accessing information in the stockshot database at the National Film Board of Canada. The Canadian Journal of Information Science, 15(4), 1-22.

United States Congress Office of Technology Assessment. 1990. Helping America compete: The role of federal scientific and technical information (GPO 0-860-705). Washington, D.C.: U.S. Government Printing Office.

Wead, G. & Lellis, G. 1981. Film: Form and function. Boston, MA: Houghton Mifflin.

Wentz, P. 1989. Computerization in museums: Databases to image bases. In M. E. Williams (Ed.), Proceedings of the 13th International Online Meeting. Medford, NJ: Learned Information.

Wiseman, N. & Earnshaw, R.A. 1992. An introductory guide to scientific visualization. Berlin: Springer-Verlag.

Appendix: Questionnaire

Please read each question carefully before starting your search. It is highly important that you show all the steps necessary to correctly answer each question. The accuracy of specific steps you perform to answer a question is as important as the final answer you find. You are reminded to verbally describe your steps as you attempt to answer each question.

1. Find the name(s) of the movie(s) in which one of the major actors is Peter Weller.

2. Find the names of two major actors in the movie Working Girl.

3. Find the name(s) of the movie(s) Ridley Scott directed.

4. Find the name(s) of the director(s) of the movie Return of the Jedi.

5. Victor Fleming directed a children's movie. Two subject terms were used (in the database) to describe this movie. Name the two subjects covered in the movie.

6. Find at least three names of Science Fiction movies. Assume Science Fiction is a subject term.

7. Can you verify whether a movie has some coverage of both "Fire and Smoke" and "Fight"? Assume "Fire and Smoke" and "Fight" are subject terms.

8. Billy Dee Williams acted in a movie that you wish to see. Can you verify if this movie contains any "Romance" scenes? Assume "Romance" is a subject term.