What Happened With Expert Systems?

Author: Murphy  |  2025-03-22

The cause of their demise could surprise you.

Younger readers may not be familiar with the term, but "Expert Systems" were a major technological initiative during the 1980s and 1990s.

I got my 30-year full-time job precisely because of Expert Systems.

That was in 1989, and I had just finished my PhD. A friend told me about an opening for PhDs who knew something about AI and, ideally, Expert Systems.

You know, during the 1980s and 90s there was what is now called an "AI winter," during which AI projects struggled to find funding and support. AI was not at all the "cool" technology it is today.

However, my future workplace had signed a contract with an industrial group to implement Expert Systems to improve operations at its factories. They were in dire need of project leaders with a PhD related to such an exotic topic as Expert Systems.

Long story short, I was hired and started working on an Expert System project for a salt factory in the Gulf of Mexico.

But what are Expert Systems?

Expert Systems (ES) was a technology for applying a set of "rules" to a given situation. The components of an ES are the following:

  • Working memory: A collection of variables with their associated values at a point in time. Often, the variables' value was taken from instrument measurements; they were considered "facts."
  • Rules: They had the structure "IF <condition> THEN <action>," where the <action> part could be used to modify a variable's value.
  • Knowledge Base: the collection of rules in a format proposed by the software vendor of the developing environment (there was no detailed standard for this).
  • Inference Engine: The reasoning mechanism for applying the knowledge base to the working memory. It comes in two flavors: "forward chaining" and "backward chaining." The first takes the facts in working memory and fires any rule whose condition (left part) evaluates to "true." The second goes "in reverse," starting from a conclusion to prove (like a disease) and working backward to the facts (like symptoms) that would support it.
  • Knowledge Acquisition environment: Interface for the developer to create and edit the rules, define variables in the working memory, etc.
  • User interface: interface for the final user; the developer defines its specifics.
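To make the components above concrete, here is a minimal sketch of a working memory, a rule set, and a forward-chaining inference engine. The salt-factory variables and rules are invented for illustration; they are not from the original project, and real ES shells like Level5 had their own rule formats.

```python
# Working memory: variables ("facts") with their current values,
# as if read from factory instruments. These names are hypothetical.
working_memory = {"brine_density": 1.21, "evaporator_temp": 108}

# Rules: IF <condition> THEN <action>, where the action updates a variable.
rules = [
    (lambda wm: wm["brine_density"] > 1.20,
     lambda wm: wm.update(status="saturated")),
    (lambda wm: wm.get("status") == "saturated" and wm["evaporator_temp"] > 105,
     lambda wm: wm.update(action="reduce_heat")),
]

def forward_chain(wm, rules):
    """Forward-chaining inference engine: fire every rule whose condition
    holds, and repeat until no rule changes working memory."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            before = dict(wm)
            if condition(wm):
                action(wm)
                if wm != before:
                    changed = True
    return wm

forward_chain(working_memory, rules)
print(working_memory["action"])  # prints "reduce_heat"
```

Note how the second rule only fires after the first one has added `status` to working memory; chaining rule conclusions into other rules' conditions is what made these systems more than a flat lookup table.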

How did Expert Systems become popular?

The promise of capturing human experts' knowledge sounded alluring to technology enthusiasts at the time. It's not only about doing one expert's job at one place—an ES could be replicated in many places, multiplying the expert's value!

Of course, the devil is in the details, and later, I will discuss the mundane miseries of real ES development. But let's keep the dream alive for now.

Some early experimental developments amazed the computer science community, like Mycin (see reference below), an ES developed at Stanford for diagnosing infectious diseases and recommending treatment. It used backward-chaining reasoning.
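Backward chaining of the kind Mycin used can be sketched as a goal-driven search: start from a candidate diagnosis and work backward, checking whether the known facts support it. The diseases, symptoms, and rules below are toy examples invented for illustration, not Mycin's actual knowledge base.

```python
# Known facts (e.g., observed symptoms).
facts = {"fever", "stiff_neck"}

# Each goal maps to alternative sets of conditions that would establish it.
rules = {
    "meningitis": [{"fever", "stiff_neck"}],
    "flu": [{"fever", "cough"}],
}

def prove(goal, facts, rules):
    """Backward chaining: a goal holds if it is a known fact, or if some
    rule for it has all of its conditions recursively provable."""
    if goal in facts:
        return True
    for conditions in rules.get(goal, []):
        if all(prove(c, facts, rules) for c in conditions):
            return True
    return False

print(prove("meningitis", facts, rules))  # True: both symptoms are present
print(prove("flu", facts, rules))         # False: no cough among the facts
```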

An academic journal called "Expert Systems with Applications" was created and quickly became influential; I was a reviewer for some of the articles submitted there.

As is usual in these cases, there were incentives to promote the hype about their potential: Stanford made the Mycin announcement as impactful as possible, and then the news outlets found a good story to keep the hype cycle moving.

Another reason for ES's popularity was the hardware: PCs were becoming extremely popular, and ES had low computational requirements, so they could run on regular PCs, even though those machines look feeble by today's standards. The ES for my salt-factory project ran on a standard PC.

It looked like anyone could have an Expert System. What a future it was!

What was it like to develop an Expert System?

In a word, developing an ES was tough. Very much so.

You see, in practice, it wasn't evident how to obtain knowledge from the human expert and then figure out how to squeeze it into a knowledge base composed of computational rules.

ES development methodology was called "Knowledge Engineering," and there were courses and certifications for it. I took one of them.

However, even armed with methodologies and certifications, ES development often encountered hurdles such as sabotage.

Yes, you read right. ES development was often sabotaged by the experts themselves, who pretended "not to be available" or something of the sort so the ES couldn't be successful. Put yourself in their shoes: "Am I supposed to give this damn system MY knowledge so I can be replaced by a computer? No way!"

In the case of my project for the salt factory, the expert was very cooperative, and for good reason: he was about to retire, so he didn't care about the prospect of being replaced by a machine. Actually, his approaching retirement had been the reason for developing the ES in the first place, as he was the only person who could operate the factory at a high level of efficiency. When the administration found out he was going to retire, they panicked.

But even with cooperative experts, one fundamental problem in developing an ES was that the knowledge had to be made explicit in the first place. Our expert could handle an operational crisis at the factory, but he wasn't able to put his reasoning into words. Even worse, he often didn't know what he knew, or what he didn't know, about the factory's operation.

You could rightfully think, "What a mess!" but this was the miserable reality of ES development, which contrasted with ES's elegant architecture and principles.

To put it in fancier words, this hurdle for ES became known as the "knowledge acquisition bottleneck."

Then, what happened with Expert Systems?

Why don't we hear about ES anymore? Weren't they efficient? Weren't they useful?

They were both efficient and useful, when you managed to build them right in the first place. However, most ES projects sank due to the knowledge acquisition bottleneck.

Don't get me wrong: Expert Systems were the first AI-related technology to become a true commercial hit. Entire companies were created to sell environments for ES development. For the salt factory, we used "Level5," which I liked for the most part.

My take is that if ES were still the only option today, they would still be in use despite the horrible hurdles of their development. Not everybody agrees on this, though: some think that ES never delivered on their promise.

I can testify, from my personal experience, that at least some ES resulted in successful projects.

The ES solution was also too simplistic for many domains. Take, for instance, legal Expert Systems, which were the subject of many (failed) projects and received a lot of funding (see reference below).

But something else entered the scene.

Machine Learning took over.

Around the year 2000, Machine Learning (ML) gained a lot of popularity, and it was no longer a matter of research hidden away in labs. With tools like Python, Jupyter Notebooks, and, above all, cheap data storage, the alternative of using lots of data to guide decision-making in companies gained traction.

Machine Learning started to be used in many practical applications, like credit risk assessment, preventive maintenance, operations optimization, and much more. In short, it started replacing many of the former candidates for Expert System development.

One deciding factor was that many companies were already capturing and storing large amounts of data about their operations and financials, so the prospect of using that data profitably was a no-brainer.

Compare what we needed to do in the two options (ES and ML):

  • ES: Interview experts. Encode their knowledge into rules, run simulations, and detect limitations. Repeat.
  • ML: Gather data, clean it, train standard models (like Random Forest), evaluate performance, and repeat.

Even though some steps in ML are not trivial, like cleaning the data, when you compare them with the prospect of interviewing a human expert and getting them to elicit knowledge they don't even know they have in the first place, the balance tips easily in favor of ML.
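The ML loop above (gather, clean, train, evaluate) can be sketched in a few lines with scikit-learn. The data here is synthetic and the sensor setup is invented; a real project would load the company's own historical records instead.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# "Gather data": two sensor readings per sample, plus a known outcome label.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # outcome derivable from the sensors

# "Clean it": here, just drop rows with missing values (none in this toy set).
mask = ~np.isnan(X).any(axis=1)
X, y = X[mask], y[mask]

# "Train standard models": hold out a test set, fit a Random Forest.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# "Evaluate performance": no expert interview required at any step.
acc = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {acc:.2f}")
```

The contrast with ES development is the point: every step here operates on recorded data, and the "knowledge" ends up implicit in the trained model rather than elicited rule by rule from a person.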

In the end, the cost comparison favored ML, while ES went the way of the Dodo.

Closing thoughts

There are some indications that the hype around ES exceeded the reality, but this is usual in new fields, as they need attention and funding in order to get traction.

AI has had this need for hype from the very beginning: John McCarthy told me personally (IRL, I mean) that several possible names were considered for the new discipline at the time, and some researchers opposed the "Artificial Intelligence" one. However, as McCarthy recounted, "I needed funding," and the AI name sounded way more striking than the alternatives, so he personally chose it. Oh, and he got the funds.

By the way, the journal Expert Systems with Applications still exists, though it hardly publishes anything about ES anymore.

But when the hype is not met by real results, it backfires – as has happened several times already. It happened for Expert Systems, and then Machine Learning delivered the final blow.

References

Buchanan, B. G., and Smith, R. G. "Fundamentals of expert systems." Annual review of computer science 3.1, pp.23–58, 1988.

Leith P., "The rise and fall of the legal expert system," in European Journal of Law and Technology, Vol 1, Issue 1, 2010.

Davis, R., Axline, S. G., Buchanan, B. G., Green, C. C., & Cohen, S. N., "Computer-based consultations in clinical therapeutics: explanation and rule acquisition capabilities of the MYCIN system." Computers and biomedical research 8.4, pp.303–320, 1975.

Get AI news analysis with my short free newsletter at https://rafebrena.substack.com/
