As research on expert systems has moved well into its second decade, it
has become popular to cite the limitations of the phenomenologic or
associational approach to knowledge representation that was typical of
first-generation systems. For example, the Internist-1 knowledge base
explicitly represents over 600 diseases, encoding associated disease
manifestations (signs, symptoms, physical findings, and laboratory
abnormalities) but not the reasons that those findings may be present
in the disease [Miller 82]. In recent years
Pople has sought to add detailed causal models to the knowledge base in
a revised version of the program known as CADUCEUS [Pople 82].
Similarly, a typical production rule in the MYCIN system states the
inferences that may be drawn when specific conditions are found to be
true [Buchanan 84], but the underlying explanations for such
relationships are not encoded. Clancey has argued that MYCIN needs such
"supporting knowledge" represented, especially if its knowledge base is
to be used for teaching purposes [Clancey 83]. By the late 1970s,
artificial intelligence researchers were beginning to experiment with
reasoning systems that used detailed mechanistic or causal models of
the object being analyzed. Among the best early examples were a program
to teach students how to analyze electronic circuits [Brown 82] and a
system for diagnosing problems with mechanical devices [Rieger 76].