
The First AI Winters 1974 – 2000

Giuseppe Cavaleri

Updated: May 8, 2024

Looming vector constructed lines like a light painting in a winter scene. Barren trees among semi-barren evergreens evoking the 1980s.
Generated by MidJourney v6. Prompted by me.

Before the last AI winter, the big hype was for something called expert systems. The idea is simple: interview experts, find the rules they use, and encode them into a program. If-then statements and Boolean logic then replace the expert. They were all the rage from the mid-1970s to 1980, though they limped along into the 1990s. https://www.sciencedirect.com/science/article/abs/pii/037872069500023P

 

Expert systems basically act like Figure 1 below:

Figure 1 from https://nbpublish.com/library_read_article.php?id=37019 (Use Chrome’s translator)

The stereotypical example of how they'd be used was industrial maintenance and repair: if this pipe knocks like this, but this dial says pressure is normal, then do ___, and so on. Expert systems would output knowledge gained only through long experience and likely never written down.
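The idea can be sketched in a few lines. The rules and symptom names below are hypothetical, invented purely for illustration; real systems of the era chained hundreds of such rules:

```python
# Minimal rule-based "expert system" sketch. The rules and observation
# names are made up for illustration only.

def diagnose(observations):
    """Return advice by matching observed facts against hard-coded rules."""
    rules = [
        # (condition over the observed facts, advice if the rule fires)
        (lambda o: o.get("pipe_knock") and o.get("pressure") == "normal",
         "Inspect the check valve for water hammer."),
        (lambda o: o.get("pressure") == "high",
         "Open the relief valve and shut down the pump."),
    ]
    for condition, advice in rules:
        if condition(observations):
            return advice
    # Nothing matched: the system has no answer at all.
    return "No rule matched - escalate to a human expert."

print(diagnose({"pipe_knock": True, "pressure": "normal"}))
# If the field changes (new equipment, new failure modes), no rule fires:
print(diagnose({"vibration": "severe"}))
```

The fallback line is the whole story of the sections that follow: anything outside the encoded rules simply falls through.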

 

The business leaders who bought into the hype either ignored, or more likely weren't aware of, expert systems' rigidity. The lack of plasticity in these systems meant big losses. If the engineering field changed, an expert system's answers to queries were no longer relevant. Or, if they were still relevant, no one capable of understanding why was around to explain it.

 

Fearless, businesses were assured that every organization would have the opportunity to interview its experienced senior staff, extract their insights on intricate systems, and then release them. This would eliminate the need to pay exorbitant rates for retired individuals to consult, among other things.

 

There were glowing articles written about how to approach this. Here’s one from 1959 titled Reasoning Foundations of Medical Diagnosis: Symbolic logic, probability, and value theory aid our understanding of how physicians reason. The first paragraph:

 

“The purpose of this article is to analyze the complicated reasoning processes inherent in medical diagnosis. The importance of this problem has received recent emphasis by increasing interest in the use of electronic computers as an aid to medical diagnostic processes (1,2). Before computers can be used effectively for such purposes, however, we need to know more about how physicians make a medical diagnosis.” https://www.science.org/doi/10.1126/science.130.3366.9
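The "probability" part of that reasoning can be sketched with Bayes' rule, the style of analysis that 1959 article helped formalize. Every number below is hypothetical, chosen only to show the mechanics:

```python
# Bayes' rule applied to a single diagnostic test. All figures are
# hypothetical, for illustration only.
prior = 0.01           # assumed prevalence of the disease
sensitivity = 0.95     # P(positive test | disease)
false_positive = 0.05  # P(positive test | no disease)

# P(disease | positive) = P(pos | disease) * P(disease) / P(pos)
p_pos = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_pos
print(f"P(disease | positive test) = {posterior:.3f}")
```

Even with a fairly accurate test, a rare disease yields a modest posterior, which is exactly the kind of non-obvious conclusion that made formalizing physicians' reasoning attractive.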

 

I chose a medical example rather than an engineering one to set up the next article in this series.

 

Exit interviews would turn into six-month contractual interrogations to train the AI that would replace you.

 

The envisioned future entailed implementing an expert system to safeguard institutional knowledge, alleviate concerns about employee turnover, and avoid awkward attempts to re-secure old employees who had likely moved on in the more favorable job markets of the past.

 

Large corporations would have the ability to downsize at will, operate with minimal staff, and avoid rehiring former employees as consultants. This concept garnered significant interest from both companies and government entities.

 

Some findings published in a 2019 edition of the International Journal of Medical Informatics, The seven key challenges for the future of computer-aided diagnosis in medicine: https://www.sciencedirect.com/science/article/pii/S1386505619300632

 

  1. Expert systems have superficial knowledge, and a simple task can potentially become computationally expensive.

  2. Expert systems require knowledge engineers to input the data; data acquisition is very hard.

  3. The expert system may choose the most inappropriate method for solving a particular problem.

  4. Problems of ethics in the use of any form of AI are very relevant at present.

  5. It is a closed world with specific knowledge, in which there is no deep perception of concepts and their interrelationships until an expert provides them.

 

When it turned out that expert systems couldn't deliver all they promised, that nested if-then statements couldn't save money as promised, the first AI Winter fell.

 

Though to many it feels like AI spring, with brighter days ahead and flowers ready to bloom, the breeze carries with it the sting of an accelerated winter.


In November 2023 it was reported that Meta's program ESMFold used a model originally designed for decoding human languages to make accurate predictions for proteins that might lead to new drugs, characterize unknown microbial functions, and trace the evolutionary connections between distantly related species. These findings were added to the open-source database ESM Metagenomic Atlas. https://esmatlas.com/

 

Earlier that year, Facebook/META disbanded the team to pivot towards commercial AI. The subtitle of the Financial Times article stings: "Prioritizing moneymaking AI products over blue-sky research." https://www.ft.com/content/919c05d2-b894-4812-aa1a-dd2ab6de794a


Sure, some argue Google/Alphabet is ahead of the game, with Lasker Award-winning team members on AlphaFold, so risk-averse META chose to cut its losses. https://erictopol.substack.com/p/a-new-precedentai-gets-the-american

 

I’d argue the top dog won’t always remain so. ChatGPT has notable performance issues now versus when it was first released to the public on GPT-3.5. Noticeable enough long before July 2023. https://www.popsci.com/technology/chatgpt-human-inaccurate/


Plus, META spent $46.5 billion on the Metaverse. I question how they define risk and decide which losses to cut. https://fortune.com/2023/10/27/mark-zuckerberg-net-worth-metaverse-losses-46-billion-earnings-stock/

Dead-eyed Mark Zuckerberg from the Metaverse can't hurt you; still, his Metaverse avatar is unnerving. Like it's trying to project welcoming vibes but can't muster it.

 

AI is a force multiplier for good... in the hands of people guided by ethics to do good AND who know the technology's limitations. An example from Nature last December: a neural network identified a new class of antibiotics that could keep Staphylococcus aureus at bay a little longer: https://www.nature.com/articles/d41586-023-03668-1


I’ve seen too many sloppy implementations in the world. In a week I’ll shed light on a few of them.

 

Where do I find the authority to say anything at all about AI? Attending UCSF’s first Informatics Day in July 2014 was a formative moment. https://ctsi.ucsf.edu/news/ucsf-informatics-day-highlights-growing-enthusiasm-data 


A year later Google's Deep Dream would be announced to the public. https://en.wikipedia.org/wiki/DeepDream


I've kept my finger on the pulse ever since.


Last year was a monumental year for AI. This year shall be too. It's never been so widely available nor easily accessible.

Meme using Dante Gabriel Rossetti's (1828-1882) painting "Pandora" (1871), showing Pandora gripping her gift from the gods: a box that, when opened, would release terrible things into the world. Meme text reads: "Hey, it's me - Pandora. Welcome to my new unboxing video."

I wonder where we'll be come winter. One thing is for sure: we'll all find out together.





 

Edited 5/8/2024 to add this post by Amy Bruckman, who worked in artificial intelligence back in 1995, about surviving the AI summer of 2024:

Posted April 26th, 2024 on Medium by Amy Bruckman: Surviving the AI Summer. Click to read more


