Journal of Southern Academic and Special Librarianship (2000)

ISSN: 1525-321X

Cataloging Expert Systems:
Optimism and Frustrated Reality

William Olmstadt
Medical Sciences Library
Texas A&M University

There is little question that computers have profoundly changed how information professionals work. The process of cataloging and classifying library materials was one of the first activities transformed by information technology. The introduction of the MARC format in the 1960s and the creation of national bibliographic utilities in the 1970s had a lasting impact on cataloging. In the 1980s, the affordability of microcomputers made the computer accessible for cataloging, even to small libraries. This trend toward automating library processes with computers parallels a broader societal interest in the use of computers to organize and store information. Following World War II, military personnel and scientists began to experiment with computers in all areas of their work, giving birth to computer science. As this field advanced, researchers began to wonder if computers might be programmed to simulate human intelligence. If so, the benefits could be extraordinary. These attempts to impart human reasoning and thought to computer software formed the basis for artificial intelligence (AI) studies, and dovetailed nicely with a growing interest in cognitive psychology.

Artificial intelligence research today focuses on several areas, including natural language processing, computer learning, computer vision, problem solving, and expert systems.1 It is the last of these that has received the most attention in librarianship in the last two decades. Expert systems were proposed and created for reference service, acquisitions, vendor management, and cataloging.2 Unfortunately, Sauperl and Saye3 show that the development of cataloging expert systems has practically ceased. This frenzy of publication about expert systems, followed by a rather sudden decline, warrants investigation.

The Nature of Expertise

Before discussing expert systems, it is necessary to review concepts of expertise. Cognitive psychology provides the most research on this topic. Ericsson and Charness4 provide a detailed review of the current research on expert behavior, and pose an alternative to traditional views of the development of expertise.

Defining expertise is a challenge for psychologists. The notion of superior performance interacts with constructs of intelligence and genetic inheritance of high ability, two other concepts that also pose problems. Ericsson and Charness describe the two prevailing views of expertise. One simply attributes superior performance to length of time performing a task. This is a common-sense explanation, as most people would agree that extensive experience leads to high performance. The alternative explains expert behavior as the result of exceptional innate ability in a specific domain or area of knowledge. Supporters of this explanation cite the crucial need to recognize and encourage children with this high level of innate domain ability in order to ensure high performance as those children grow.

Ericsson and Charness4 reject these two explanations. They assert that expertise is much more complex, and is primarily the result of deliberate practice on representative tasks in the domain. First, deliberate practice entails the individual wanting to perform the task and making an effort to improve performance. This excludes amateur performance or simply “playing” as the origin of expert behavior. Amateurs and people performing a task for recreation typically perform only well enough to remain competent at it. Experts, however, deliberately intend to exceed competence and actually improve their performance. Second, representative tasks are simply the most standard ones for a domain. Expertise, they write, is not the result of superior performance on unusual tasks in the domain, or on tasks for which guidance already exists. For instance, research shows that expert chess players excel at figuring out moves in the middle of a game, not at the beginning, since several standard texts exist detailing opening moves in chess. Therefore, the essence of expertise in chess stems from knowing standard ways to progress through the middle of a game. Ericsson and Charness do not dispute, however, that this deliberate practice must occur over a long period of time. The uniform finding from most research is that even low-level expertise takes a minimum of 10 years of full-time study and practice to develop. There are exceptions, but in most domains, 10 years is the minimum. With these current models of expertise in mind, one can better understand expert systems (ES).

Expert Systems

There are many definitions of expert systems, and the literature in librarianship presents inconsistent definitions. An attempt to synthesize these definitions is necessary to achieve a clear understanding of the phrase.

Expert systems use human expertise (the result of deliberate practice on standard tasks over many years) to answer questions, pose questions, solve problems, and assist humans in solving problems. They do so by using inferences similar to those a human expert would make, to produce a justified, sound response in a brief period of time. When questioned, they should be able to produce the rules and processes that show how they arrived at the solution. This is the composite of many attempts to define ES in the literature, and reflects elements many authors felt important at the time.

The description of the components of an ES is even more divergent. Surveying the literature on ES over the past two decades produces many different descriptions of their parts. However, the one constant is the presence of a knowledge base, where human expertise is transformed by system architects into rules the computer can use. This knowledge base distinguishes ES from other AI applications. The number of rules in this component is variable and depends on the domain in which the ES is used; estimates range from 10,000 to 20,000 rules,5 with the maximum found in the most sophisticated ES designs. Holthoff1 concludes that ES performance is directly dependent on the size of this knowledge base.
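As a rough illustration of what such rules look like, the Python fragment below sketches a toy knowledge base of condition/conclusion rules and a function that returns every conclusion whose conditions hold. The rule content is invented for illustration and is not drawn from any of the prototypes discussed below; a production knowledge base would differ mainly in scale, which is precisely the point of the estimates above.

    # A toy knowledge base: each rule pairs a set of conditions with a conclusion.
    # The rule content is invented for illustration only.
    KNOWLEDGE_BASE = [
        {"if": {"single_author": True, "collective_title": True},
         "then": "enter under the heading for the author"},
        {"if": {"single_author": False, "named_editor": True},
         "then": "enter under title; trace the editor as an added entry"},
    ]

    def matching_rules(facts):
        """Return the conclusion of every rule whose conditions all hold in `facts`."""
        return [rule["then"] for rule in KNOWLEDGE_BASE
                if all(facts.get(key) == value for key, value in rule["if"].items())]

    print(matching_rules({"single_author": True, "collective_title": True}))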

Beyond the omnipresent knowledge base, there are components authors variably choose to include in describing the design of ES. The second most frequently mentioned part is the inference engine. This represents the processes and actions the computer will perform on the rules in the knowledge base. These are extremely difficult to code, and there are several commercially available products, known as “shells,” for constructing ES that already include an inference engine.6 The inference engine works in one of two ways, backward chaining or forward chaining. Backward chaining ES start with a goal and prompt the user for information until arriving at a rule which will satisfy the user need.7 Forward chaining ES combine rules in the knowledge base according to user input and attempt to create a new rule that satisfies all the conditions.6 Carrington7 notes that simple expert systems are most efficient when using backward chaining.
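The difference between the two strategies can be shown with a generic sketch. In the Python fragment below, the rules and facts are invented and no actual shell is being reproduced; forward chaining derives everything that follows from the facts already known, while backward chaining starts from a goal and prompts the user only for the facts it cannot derive.

    # Generic sketch of forward and backward chaining over invented rules.
    # Each rule is (set of conditions, conclusion); no real shell works exactly this way.
    RULES = [
        ({"has_single_author"}, "main_entry_is_author"),
        ({"main_entry_is_author", "title_proper_known"}, "description_complete"),
    ]

    def forward_chain(facts):
        """Combine known facts with rules until no new conclusions appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    def backward_chain(goal, facts, ask):
        """Work backward from the goal, prompting the user when a fact is unknown."""
        if goal in facts:
            return True
        for conditions, conclusion in RULES:
            if conclusion == goal and all(backward_chain(c, facts, ask) for c in conditions):
                facts.add(goal)
                return True
        # No rule concludes this goal, so it must come from the user.
        if ask(goal):
            facts.add(goal)
            return True
        return False

    # Forward chaining: derive all conclusions from the facts supplied.
    print(forward_chain({"has_single_author", "title_proper_known"}))

    # Backward chaining: pursue one goal, prompting for whatever is missing.
    known = set()
    print(backward_chain("description_complete", known,
                         ask=lambda fact: input(f"Is '{fact}' true? (y/n) ") == "y"))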

Other authors2 5 include the user interface as one of the essential components of ES, although many authors mysteriously choose to omit this component. This interface is how the ES interacts with the user. Many ES described in the literature ask questions of the user, while others rely on a menu-driven interface.8 Carrington7 notes that while menu-driven interfaces may appear slow and unattractive to the user, their value lies in forcing the user to choose options that represent the knowledge in the ES. Finally, even fewer authors5 6 include a general database (or working storage) in the components of ES. This is the memory space the ES uses to hold the information about the current problem. It is from this working storage that the ES produces the rules and deductions it made to arrive at a conclusion.
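As a rough illustration of how these last two components fit together, the Python sketch below combines a question-driven interface with a working store that records both the user's answers and the rule that fired, so the conclusion can be explained afterward. The questions and rules are invented for the example.

    # Sketch of a question-driven interface plus working storage.
    # The working store holds the facts of the current problem and a trace
    # that lets the system show how it reached its conclusion.
    ACCESS_POINT_RULES = [
        ({"one_author": "y", "collective_title": "y"}, "enter under the author"),
        ({"one_author": "n"}, "enter under title"),
    ]

    def consult():
        working_storage = {}   # facts gathered about the item in hand
        trace = []             # record of answers and rule firings
        for question in ("one_author", "collective_title"):
            answer = input(f"{question}? (y/n) ")
            working_storage[question] = answer
            trace.append(f"user answered {question} = {answer}")
        for conditions, conclusion in ACCESS_POINT_RULES:
            if all(working_storage.get(k) == v for k, v in conditions.items()):
                trace.append(f"rule fired: {conditions} -> {conclusion}")
                return conclusion, trace
        return "no applicable rule", trace

    conclusion, trace = consult()
    print(conclusion)
    print("\n".join(trace))    # the explanation reconstructed from working storage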

It is apparent that there are many differences in what authors choose to include as part of an ES. Some authors9 only mention the knowledge base and inference engine, while Sauperl and Saye3 mention all four components. This could be the result of different audiences for their publications, but the effect is a lack of consensus on what constitutes an ES. In addition, authors disagreed as to the importance of the components. Jeng2 clearly states that the knowledge base is the center of any ES, while Schuegraf6 asserts that the inference engine is. This lack of consensus is even more apparent when discussing the prototype ES built for cataloging and the problems they encountered.

Cataloging Expert Systems

Having discussed expertise and expert systems, why is cataloging an appropriate arena in which to experiment with ES? The existing level of highly codified knowledge about cataloging10 11 is the most frequently cited reason for using ES in cataloging. AACR2, the MARC format, the Library of Congress Rule Interpretations, OCLC manuals, and a number of other interpretive guides for cataloging form precisely the kind of knowledge that is transferable to the knowledge base of an ES. Additionally, the local practices of cataloging lend themselves to formalization in rules that can then be input into an ES to enhance local utility. Cataloging is also a small domain, and ES must focus on a limited domain in order to be effective.7 12

There appear to be five major cataloging expert system prototypes reported in the last 20 years. Each of these was implemented with varying degrees of success, although none were actually used in the daily work of catalogers. However, the researchers on these projects had different goals for their systems, so success in the traditional sense is difficult to gauge. Clarke and Cronin13 predict two paths of development for cataloging ES. The first path views ES as an intermediary and an assistant with much of the intellectual work still left to the human. The second path views ES as completely automating the cataloging process, from publication to shelf preparation, with no human intervention. It should come as no surprise that these prototypes succeeded exclusively in terms of the first path.

University of Exeter

The most commonly cited ES for cataloging, and the first ever reported, according to Hjerppe and Olander,8 was the work of Roy Davies and Brian James at the University of Exeter in England in 1984. James developed a menu-driven ES with the goal of producing a complete cataloging record (catalog card or computer file) for an item.14 Their work was pioneering, but encountered problems chiefly related to the inadequate technology of the time.3 Their equipment did not have the capacity to store the entirety of AACR2, and was taxed to store the few chapters their project successfully used. Their work was also the first to confront the complexity of using AACR2 as a knowledge base for a cataloging ES. Nevertheless, James earned an M.Sc. for his efforts. The two authors have never again published together, and it appears that the system they envisioned was discontinued.

Linköping University in Sweden

Following on the heels of the Davies and James experiments, Hjerppe and Olander8 reported on the creation of two ES for cataloging. Their project, titled ESSCAPE (Expert Systems for Simple Choice of Access Points for Entries), produced two ES that were viable in the laboratory at Linköping University. Both were built with ES “shells” commercially available at the time, EMYCIN and ExpertTrees. The two shells resulted in quite different products, however. ESSCAPE/EMYCIN was developed with the intent of producing a complete cataloging record, while ESSCAPE/ExpertTrees simply produced advice on which AACR2 rule should be used.

EMYCIN is a shell with a backward chaining inference engine; in ESSCAPE/EMYCIN, the top goal is a complete bibliographic record. The prototypes did not use ISBD punctuation, but the authors noted it would be simple to program the ES to do so. Users were cued for certain information about the item in hand, and ESSCAPE/EMYCIN produced a record based on the input. The authors noted several limitations, however. This ES could not properly handle works of mixed responsibility, always treated collections as if they had a collective title, always treated the first author as the principal author, and did not consider the forms of the headings in the record (it performed no authority work). However, since this ES was intended to produce a complete cataloging record, it is closer to Clarke and Cronin's13 second path for ES development than the other prototypes.
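The sketch below is only a loose Python illustration of the behavior described above: the user is cued for a handful of elements and a record is assembled without ISBD punctuation or authority work. The field names and the crude main-entry choice are invented for the example and are not ESSCAPE's.

    # Loose illustration of a goal-driven session that cues the user for
    # descriptive elements and assembles a record (no ISBD punctuation,
    # no authority work). Field names are invented, not ESSCAPE's.
    FIELDS = ["title proper", "statement of responsibility",
              "place of publication", "publisher", "date of publication"]

    def build_record():
        record = {}
        for field in FIELDS:
            record[field] = input(f"{field}: ")
        # Treat the first named person as the principal author -- mirroring one
        # of the simplifications the authors themselves acknowledge.
        record["main entry"] = record["statement of responsibility"].split(",")[0].strip()
        return record

    for element, value in build_record().items():
        print(f"{element}: {value}")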

ExpertTrees was not a true ES. For instance, it lacked the ability to show how it arrived at its conclusions. However, ExpertTrees was designed to run on IBM PCs, which made it relatively affordable. This system was constructed using a matrix of decisions in a spreadsheet, with the farthest right column of the spreadsheet being the conclusion the ES would produce based on the answers in the row to its left. Users answered a series of questions in a menu format about the item in hand until ExpertTrees decided it had enough information, based on the matrix of possibilities, and suggested the appropriate AACR2 rule to apply to the item. In essence, as users answered the menu queries, ExpertTrees found the appropriate row in the matrix, across which it progressed until it needed no further information and produced the answer. This is certainly an example of the more assistive nature of cataloging ES which Clarke and Cronin13 describe in their first path.
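The mechanism described above can be sketched briefly. In the Python fragment below, each row pairs a sequence of expected answers with a conclusion in the rightmost position; questions are asked in order and rows are eliminated until only one remains. The questions, answers, and conclusions are placeholders invented for the example, not the actual ExpertTrees matrix.

    # Sketch of an ExpertTrees-style decision matrix read left to right.
    # The rightmost element of each row is the conclusion. All content is invented.
    QUESTIONS = ["More than three authors?", "Collective title present?"]
    MATRIX = [
        (["no",  "yes"], "suggest rule A (placeholder)"),
        (["no",  "no"],  "suggest rule B (placeholder)"),
        (["yes", "yes"], "suggest rule C (placeholder)"),
        (["yes", "no"],  "suggest rule D (placeholder)"),
    ]

    def consult(matrix):
        candidates = matrix
        for column, question in enumerate(QUESTIONS):
            answer = input(f"{question} (yes/no) ")
            candidates = [row for row in candidates if row[0][column] == answer]
            if len(candidates) == 1:     # enough information gathered; stop early
                break
        return candidates[0][1] if candidates else "no matching row"

    print(consult(MATRIX))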

Hjerppe and Olander8 stated that, as of 1985, they were no longer pursuing development of these two ES and had instead moved on to devising an ES for authority control. Reports of that project, however, were never published.

CATALOG AID

Roy Chang12 describes a simple ES he constructed to complete a degree in computer science. Chang built his system in a computer language called Prolog, rather than with a shell. Prolog is a symbolic computer language in which statements are represented in terms of actions and the agents involved. The statement “Smith catalogs books,” for example, is expressed in Prolog as catalogs(smith, books). The author presents a thorough analysis of Prolog that is outside the scope of this paper. It is relevant, however, that his system was more laborious to construct, since he not only transferred part of AACR2 to the knowledge base, but also wrote the inference engine in Prolog. As with the other three systems, users are cued to enter information, this time in yes/no format rather than from a menu, to which the ES replies with the correct AACR2 rule to apply. Again, this is an assistive ES in Clarke and Cronin's13 sense, and very much like ExpertTrees8 in terms of the user interface. Also, as with the other three ES, this one was never used in practice, and at one point Chang himself appears to condemn it as useless unless it were set up to handle the exceptions to what catalogers do, rather than the regular daily work. Chang concludes by noting the processor-intensive nature of ES, which echoes concerns from the other three projects. Although Chang constructed a prototype system, his work is rarely cited again. One wonders whether its publication in a state library journal is a possible cause.
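For readers unfamiliar with the notation, the fact style Chang describes can be approximated in a few lines of Python. The fragment below represents catalogs(smith, books) as a (predicate, arguments) tuple and answers yes/no queries against a small fact base; it illustrates the representation only and is not Chang's system.

    # Rough Python analogue of the Prolog fact style described above:
    # "Smith catalogs books" becomes the term catalogs(smith, books),
    # stored here as a (predicate, arguments) tuple. Illustration only.
    FACTS = {
        ("catalogs", ("smith", "books")),
        ("catalogs", ("jones", "maps")),
    }

    def holds(predicate, *args):
        """Yes/no query, in the spirit of the Prolog goal ?- catalogs(smith, books)."""
        return (predicate, args) in FACTS

    print(holds("catalogs", "smith", "books"))   # True
    print(holds("catalogs", "smith", "maps"))    # False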

MAPPER

Ercegovac and Borko15 describe a prototype ES for map cataloging according to AACR rules. Cataloging maps is a very narrow domain and suits the limitations of ES well. MAPPER is another example of Clarke and Cronin's13 first type of ES, in that it is menu-driven and assists the user in choosing the main entry, title statement, statement of responsibility, publisher, place, and year of publication for a map.

These researchers made strides in designing and testing ES for cataloging, in that they included knowledge from map catalogers as well as AACR2 in the knowledge base, and they formally tested the system for ease of use and accuracy. Usability was determined by having many students test MAPPER and evaluate it on a standard scale, and accuracy was checked by comparing MAPPER-produced answers to the decisions the Library of Congress made when cataloging identical maps. These tests for accuracy and usability were not performed on any of the four systems previously described, and represent an improvement in the construction of ES for cataloging.3 Ercegovac earned a Ph.D. for her role in the MAPPER project, but MAPPER is seldom mentioned again in the cataloging and technical services literature.

These five prototypes represent a decade of enthusiasm about the use of ES in cataloging. However, Meador and Wittig's prediction that “a workable expert system...will probably become ready sometime during the 1990s”10 simply did not come true. There are many reasons for this failure, some documented in the research, others observations by this author.

Barriers to Implementation

There are many potential barriers in any sort of system design project. Some rest with the individuals who will be using the system, who often show resistance when their ways of doing things are changed. Most of the problems with ES implementation for cataloging, however, appear to be a result of the nature of the task of cataloging.

Problems with Total Automation

In practice, cataloging an item often involves changes of course partway through the process: the recognition that a special local call number is needed, that a series authority record must be checked, and various other problems. This makes cataloging a nonlinear process. However, the reported ES prototypes appear to treat the decision-making as linear. This is particularly evident in the menu-driven systems. What is the user to do when special problems are recognized halfway through a session with ExpertTrees, for example? Back out and start over again? Running the entire ES again after having identified a problem could take more time than not using the ES in the first place, which is completely contrary to the efficiency goals of ES. While Hjerppe and Olander8 do say that some exits were built into their systems, this incapacity to handle the nonsequential nature of cataloging was surely a barrier to implementing ES for cataloging. Nor do Clarke and Cronin13 address what happens in their second view of ES (total automation) when problems are encountered during the process. This is the kind of situation in which human judgment, or perhaps even physical intervention, is necessary. Unfortunately, ES are not good at dealing with ambiguity.

Problems with Cataloging Expertise

Ericsson and Charness4 note the difficulty of constructing the knowledge base of any ES. While experts show consistently superior performance, they are almost uniformly very poor at describing how they achieve that performance. This makes the transformation of their expertise into the rules of a knowledge base very difficult. Many authors2 8 11 cite the construction of an adequate knowledge base as the most difficult part of creating an ES. Until recently, very little research existed about cataloging expertise and how it might best be transferred to the components of an ES. Jeng and Weiss16 present research done at the National Agricultural Library cataloging department that was expressly designed to investigate the nature of cataloging expertise. Their use of unstructured interviews, content analysis of internal documentation, and verbal reports from catalogers produced a model of expertise with four parts: searching bibliographic databases; determining access points; interpreting bibliographic data and rules; and identifying and prioritizing special problems. These are the areas at which expert catalogers excel, according to Jeng and Weiss. Presumably, eliciting expert knowledge in each of these four areas would improve the knowledge base of an ES.

Jeng and Weiss16 represent one of the first real attempts to address expertise the way Ericsson and Charness4 view it. AACR2 and the other codified knowledge of cataloging were useful for prototypes, but AACR2 rules are not equivalent to expertise. Yet this is the assumption most ES researchers have made for the last two decades: if all of AACR2, or at least parts of it,3 is in the knowledge base of an expert system, the system will truly represent the expertise of human expert catalogers. This is tantamount to saying that AACR2 is all that expert catalogers know and use in their daily work.

Obviously, this is specious reasoning, and Hjerppe and Olander8 did try to point out the problems with AACR2 as a knowledge base. They analyzed the structure of one AACR2 chapter in terms of typesetting and presentation in outline form, and found it to be severely lacking. They contend that too much of cataloging is not based on AACR2, but rather on what the cataloger knows about interpreting AACR2. Sauperl and Saye3 echo this in mentioning that this skill at interpreting AACR2 is what cataloging instructors still spend most of their time teaching in library school. Jeng and Weiss16 present research that represents an attempt to overcome this barrier to implementing ES for cataloging, yet more recent research on cataloging expertise is lacking.

Additionally, Ericsson and Charness4 caution against overvaluing the “social expert,” someone who is presumed to be an expert but lacks the psychologically testable criteria for being one. In small libraries, it is easy to see how one person with knowledge about cataloging may be perceived as the expert, when that individual may simply have been doing cataloging longer than the other employees. Even the research on cataloging expertise is questionable, as it almost always relies on measures of expertise based simply on length of time cataloging. On these measures, Ericsson and Charness4 would not agree with Jeng and Weiss's16 assessment of expertise. No attempts were made to show that the catalogers deliberately practiced improving their performance or that the tasks about which the catalogers were interviewed were standard tasks. Unfortunately, this lack of awareness of current thinking about expertise in psychology is a serious deficiency in library and information science research on ES. Most authors in the field appear to use expertise in a less stringent sense, which leads to varying interpretations of expertise and, as this paper has shown, a fair amount of chaos in the literature. Still, efforts to delineate the nature of expertise in cataloging are important, and a step in the right direction in ES research.

Problems with Priorities

Bailey, cited in Jeng,2 notes that in the 1991 Association of Research Libraries survey, only 6% of respondents were developing ES in their libraries. When asked why, 61% claimed they had higher priorities. This does not bode well for the use of ES in libraries on a regular basis. The Association of Research Libraries is composed of some of the wealthiest and most technologically sophisticated libraries in the United States and Canada, and these are really the only places where ES could be tested and researched, outside of computer labs and private industry. Public libraries, small corporate information centers, and other poorly funded information centers are not able to afford the equipment and technology to support an ES, particularly when these institutions already struggle to keep pace with technological advances. Bailey's findings show that even at the beginning of the 1990s, there was declining interest in ES for libraries in general. Not coincidentally, this is the time the reported literature on ES became almost nonexistent. This lack of interest, bordering on apathy, by the only organizations in librarianship that could have feasibly afforded ES, is a major barrier to implementation. No one will pay for and develop systems that are expensive and have little immediate payoff. It is likely that the problems catalogers now face with web resources, electronic database licensing issues, and an explosion of multimedia formats have supplanted the concern for using ES in cataloging. After all, it is difficult to construct an expert system when many decisions about new formats have yet to be made.

Problems with System Design

As computer hardware and software developed, the systems with which the first ES were designed rapidly became obsolete. The literature on ES in cataloging, the bulk of which is from the 1980s, still talks about using IBM PCs to design ES. The MAPPER project was built on HyperCard for the Macintosh. Since OCLC, the premier bibliographic utility in the nation, does not support cataloging on Macintosh platforms, one wonders why ES would be constructed on incompatible platforms when they might need to interact with such services. This highlights some of the constraints present in building any system. Obviously, availability is a factor. All of these prototype systems were designed in academic institutions, and all but the ESSCAPE projects were used as the professional work to earn an academic degree. Academic institutions often lack the funding to purchase the latest software, including the shells to construct ES. Carrington7 reported a range of prices for these shells from $95 to $495.

Time is a constraint, too. As Borko17 notes, people trying to complete an academic degree do not have years to program the knowledge base of an ES for cataloging. Existing knowledge is also a limitation. Although Carrington7 and Hjerppe and Olander8 reported that even inexperienced librarians could construct ES with these shells, many librarians at the end of the 1980s lacked the level of technological sophistication these authors exhibited.

Conclusions

The abrupt end to almost 25 years of cataloging expert systems research is the product of several factors.

It has been shown that the literature on cataloging expert systems in library and information science is inconsistent, which did not aid the quest to implement them in the 1990s. Researchers were not able to agree on the fundamental components of expert systems, a definition of expert systems, what constitutes expertise in general and in cataloging, or how best to go about constructing the systems. There was little focus on a unified vision for cataloging with expert systems, with one camp of researchers proposing total automation and others proposing more assistive roles for ES. No happy medium was reached. As web resources exploded in the 1990s, catalogers were overwhelmed with planning how to treat these formats, producing such codes as the Dublin Core. Cataloging ES research fell by the wayside, and its continuation does not look promising.

As Morris18 remarked, expert systems research “experienced periods of exponential growth, promotional hype, and subsequent disillusionment.” It is hoped that if cataloging expert systems research actively resumes in the future, it is reborn with a greater sense of common purpose and a more rigorous study of expertise. It is not necessary to reinvent the wheel.

1. Tim Holthoff, “Expert librarian applications of expert systems to library technical services,” Technical Services Quarterly 7, no. 1 (1989): 1-16.

2. Judy Jeng, “Expert system applications in cataloging, acquisitions and collection development: a status review,” Technical Services Quarterly 12, no. 3 (1995): 17-28.

3. Alenka Sauperl and Jerry Saye, “Pebbles in the mosaic of cataloging expertise: what do problems in expert systems for cataloging reveal about cataloging expertise?,” Library Resources and Technical Services 43, no. 2 (1999): 78-94.

4. K.A. Ericsson and Neil Charness, “Cognitive and developmental factors in expert performance,” in Expertise in Context: Human and Machine, ed. P.J. Feltovich, K.M. Ford, and R.R. Hoffman (Menlo Park, CA: AAAI Press/The MIT Press, 1997), 1-41.

5. Barbara Anderson, “Expert systems for cataloging: will they accomplish tomorrow the cataloging of today?,” Cataloging and Classification Quarterly 11, no. 2 (1990): 33-48.

6. Ernst Schuegraf, “A survey of expert systems in library and information science,” Canadian Journal of Information Science 15, no. 3 (1990): 42-57.

7. Bessie M. Carrington, “Expert systems: power to the experts,” Database 13 (1990): 47-50.

8. Roland Hjerppe and Birgitta Olander, “Cataloging and expert systems: AACR2 as a knowledge base,” Journal of the American Society for Information Science 40, no. 1 (1989): 27-44.

9. Gary Orwig and Ann Barron, “Expert systems: an overview for teacher-librarians,” Emergency Librarian 19 (1992): 19-21.

10. Roy Meador and Glenn Wittig, “Expert systems for automatic cataloging based on AACR2: a survey of research,” Information Technology and Libraries 7 (1988): 166-171.

11. P.F. Anderson, “Expert systems, expertise, and the library and information professions,” Library and Information Science Research 10 (1988): 367-88.

12. Roy Chang, “Developing a cataloging expert system,” Illinois Libraries 72 (1990): 592-596.

13. Ann Clarke and Blaise Cronin, “Expert systems and library/information work,” Journal of Librarianship 15 (1983): 277-292.

14. Roy Davies and Brian James, “Towards an expert system for cataloging: some experiments based on AACR2,” Program 18 (1984): 283-97.

15. Zorana Ercegovac and Harold Borko, “Design and implementation of an experimental cataloging advisor-Mapper,” Information Processing and Management 28 (1992): 241-57.

16. Ling Hwey Jeng and Karen B. Weiss, “Modeling cataloging expertise: a feasibility study,” Information Processing and Management 30, no. 1 (1994): 119-129.

17. Harold Borko, “Getting started in library expert systems research,” Information Processing and Management 23, no. 2 (1987): 81-87.

18. A. Morris, “Expert systems for library and information services - a review,” Information Processing and Management 27, no. 6 (1991): 713-724.


Citation Format

Olmstadt, William. (2000). Cataloging Expert Systems: Optimism and Frustrated Reality. Journal of Southern Academic and Special Librarianship: 01 [iuicode: http://www.icaap.org/iuicode?62.01.03.03]