The history of AI in 33 breakthroughs: the first expert system

In the early 1960s, computer scientist Ed Feigenbaum became interested in “creating models of the thought processes of scientists, particularly the processes of empirical induction by which hypotheses and theories were inferred from data”. In April 1964, he met geneticist (and Nobel Prize winner) Joshua Lederberg, who told him how experienced chemists use their knowledge of how compounds tend to break down in a mass spectrometer to make inferences about the structure of a compound.

Recalling in 1987 the development of DENDRAL, the first expert system, Lederberg remarked: “…we were trying to invent AI, and in the process we discovered an expert system. This paradigm shift, ‘that knowledge is power,’ was explained in our 1971 paper [On Generality and Problem Solving: A Case Study Using the DENDRAL Program] and has been the banner of the knowledge-based systems movement within AI research from that point on.”

Expert systems represented a new stage in the evolution of AI, moving from its initial focus on general problem solvers to expressing human reasoning in code, i.e., drawing inferences and coming to logical conclusions. The new emphasis was on knowledge, in particular the knowledge of experts specializing in a (narrow) domain, and more specifically their heuristic knowledge.

Feigenbaum explained heuristic knowledge (in his 1983 talk “Knowledge engineering: the applied side of artificial intelligence”) as “the knowledge that constitutes the rules of expertise, the rules of good practice, the rules of judgment in the field, the rules of plausible reasoning… As opposed to the facts of the field, its rules of expertise, its rules of good guessing, are rarely written down.”
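To make the idea concrete, here is a minimal sketch, in Python, of knowledge expressed as if-then rules and applied by a forward-chaining inference loop, the basic mechanism behind many expert systems. The rules and facts are invented toy examples, not DENDRAL’s actual chemistry knowledge.

```python
# Hypothetical heuristic rules: (name, premises, conclusion).
# Purely illustrative, not DENDRAL's rule base.
rules = [
    ("R1", {"peak_at_mass_44", "contains_oxygen"}, "likely_co2_loss"),
    ("R2", {"likely_co2_loss"}, "candidate_carboxylic_acid"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all established facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"peak_at_mass_44", "contains_oxygen"}, rules))
# -> includes 'likely_co2_loss' and 'candidate_carboxylic_acid'
```

The point of the sketch is that the expertise lives entirely in the rule list; the inference loop is generic and trivially simple, which is exactly what made the knowledge itself the scarce resource.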

Pamela McCorduck in This Could Be Important: My Life and Times with the Artificial Intelligentsia (2019):

“In 1965, Feigenbaum and Lederberg assembled a superb team, including philosopher Bruce Buchanan and later Carl Djerassi (one of the ‘fathers’ of the birth control pill), as well as brilliant graduate students who would go on to make their own mark in AI. The team began to study how scientists interpreted the output of mass spectrometers. To identify a chemical compound, how did an organic chemist decide which of several possible pathways would be more likely than others? The key, they realized, is knowledge – what the organic chemist already knows about chemistry. Their research would produce the Dendral program (for the dendritic, tree-like algorithm exhibiting spreading roots and branches), with fundamental assumptions and techniques that would completely change the direction of AI research.”

The experience with DENDRAL informed the development of the Stanford team’s next expert system, MYCIN (named for the common suffix of many antimicrobial agents), designed to help doctors diagnose bloodstream infections. Feigenbaum used MYCIN to illustrate the different aspects of knowledge engineering, stating that expert systems must explain to the user how they arrived at their recommendations, “otherwise the systems will not be credible to their professional users”.
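One way such explanations can work: if the inference engine records which rule produced each conclusion, it can replay that record on demand. The sketch below extends the toy engine above with a justification trace; the medical rules are hypothetical illustrations, not MYCIN’s actual rule base.

```python
def forward_chain_with_trace(facts, rules):
    """Forward chaining that also records the rule behind each conclusion."""
    facts, trace = set(facts), {}
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace[conclusion] = (name, premises)  # remember the justification
                changed = True
    return facts, trace

def explain(conclusion, trace):
    """Walk the trace backwards to answer 'how did you conclude this?'"""
    if conclusion not in trace:
        print(f"{conclusion}: given as input")
        return
    name, premises = trace[conclusion]
    print(f"{conclusion}: derived by rule {name} from {sorted(premises)}")
    for p in premises:
        explain(p, trace)

# Hypothetical toy rules, loosely in MYCIN's spirit (not its actual rules):
rules = [
    ("R1", {"gram_negative", "grew_in_blood_culture"}, "suspect_bacteremia"),
    ("R2", {"suspect_bacteremia"}, "recommend_antimicrobial_therapy"),
]
facts, trace = forward_chain_with_trace(
    {"gram_negative", "grew_in_blood_culture"}, rules)
explain("recommend_antimicrobial_therapy", trace)
```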

As has happened again and again with new breakthroughs throughout AI history, expert systems generated a lot of hype, excitement, and failed predictions. Expert systems were “the big thing” of the 1980s, and it was estimated that two-thirds of Fortune 500 companies applied the technology in their day-to-day business operations, before the enthusiasm collapsed into the “AI winter” of the late 1980s.

Already in 1983, Feigenbaum identified the “key bottleneck” that led to their eventual demise, that of scaling up the knowledge acquisition process: “Knowledge is currently acquired in a very painstaking way that reminds one of cottage industries, in which individual computer scientists work with individual experts in disciplines painstakingly to explicate heuristics. In the decades to come, we must have more automatic means for replacing what is currently a very tedious, time-consuming and expensive procedure. The problem of knowledge acquisition is the key bottleneck problem of artificial intelligence.”

Automation of knowledge acquisition eventually took place, but not via the methods envisioned at the time. In 1988, members of the IBM T.J. Watson Research Center published “A Statistical Approach to Language Translation,” announcing the shift from rule-based machine translation methods to probabilistic ones, and reflecting another shift in the evolution of AI towards “machine learning,” based on statistical analysis of known examples rather than on comprehension or “understanding” of the task at hand.
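The flavor of that statistical framing can be compressed into a single line: among candidate translations e of a foreign sentence f, choose the one maximizing P(e) × P(f|e), a language-model score times a translation-model score. A toy sketch, with invented candidates and probabilities (the 1988 IBM paper estimated such models from data rather than hand-assigning them):

```python
# Noisy-channel scoring in miniature: pick the candidate translation e
# that maximizes P(e) * P(f|e). All values here are invented.
candidates = {
    #  candidate English e:  (language-model P(e), translation-model P(f|e))
    "the house is small":  (0.020, 0.30),
    "the house is little": (0.010, 0.40),
    "small is the house":  (0.001, 0.35),
}

best = max(candidates, key=lambda e: candidates[e][0] * candidates[e][1])
print(best)  # "the house is small": 0.020*0.30 = 0.006 beats 0.010*0.40 = 0.004
```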

And while knowledge for Feigenbaum was the heuristic knowledge of experts in very specific fields, knowledge became, especially after the advent of the Web, any digitized entity accessible on the Internet (and beyond) that could be extracted and analyzed by machine learning, and over the last decade, by its more advanced version, “deep learning”.

In his 1987 personal history of the development of DENDRAL, Lederberg wrote of Marvin Minsky’s critique of the generate-and-test paradigm: that for “any problem worth the name, searching through all possibilities will be too inefficient for practical use”. Lederberg: “He had in mind chess playing, with its 10^120 possible move paths. It is true that equally intractable problems, such as protein folding, are known in chemistry and other natural sciences. These are also difficult for human intelligence.”

In November 2020, DeepMind’s AlphaFold model, a deep learning system designed to identify the three-dimensional structures of proteins, achieved remarkably accurate results. In July 2022, DeepMind announced that AlphaFold had predicted the structure of some 200 million proteins from one million species, covering just about every protein known to science.
