- Alan Turing introduces the "Turing Test" in his 1950 paper "Computing Machinery and Intelligence."
- The 1956 Dartmouth Conference establishes AI as a field of study.
- Development of symbolic AI and rule-based systems; funding and interest in AI grow globally.
- Interest declines as AI systems struggle with real-world complexity.
- Despite theoretical progress, AI systems fail to solve real-world problems because they depend on pre-defined rules and limited computational power. Governments and organizations begin questioning AI's feasibility.
- The 1973 Lighthill Report in the UK criticizes AI's limited progress, leading to funding cuts there and, soon after, in other nations.
- AI research focuses on narrow, highly specialized tasks such as chess and mathematical proofs, but general AI remains elusive; skepticism grows among researchers and funders.
- Expert systems such as MYCIN and DENDRAL show promise in specialized domains. They spark a brief resurgence of interest but fail to adapt beyond narrow use cases.
- Governments and businesses begin pulling back from AI investment; the U.S. and Europe shift focus to more commercially viable computing technologies.
- The commercial Lisp machine market collapses around 1987, deepening skepticism toward AI across the tech industry.
- Many researchers leave AI for adjacent fields such as software engineering and robotics. Universities cut funding for AI programs, and new researchers avoid the field.
- Researchers such as Geoffrey Hinton and David Rumelhart refine and popularize backpropagation for training neural networks, notably in their 1986 publication.
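The backpropagation technique mentioned above is, at its core, repeated application of the chain rule to push error gradients from the output back through each layer. A minimal, self-contained Python sketch of the idea on a toy one-hidden-unit network (the network shape, names, and numbers here are illustrative, not taken from the original papers):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    """Forward pass: input x -> hidden h -> output y (toy network)."""
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    return h, y

def loss(x, target, w1, w2):
    """Squared-error loss L = 0.5 * (y - target)^2."""
    _, y = forward(x, w1, w2)
    return 0.5 * (y - target) ** 2

def backprop(x, target, w1, w2):
    """Return dL/dw1 and dL/dw2 via the chain rule (backpropagation)."""
    h, y = forward(x, w1, w2)
    dL_dy = y - target              # derivative of the loss w.r.t. output
    dy_dz2 = y * (1.0 - y)          # sigmoid derivative at the output unit
    dL_dw2 = dL_dy * dy_dz2 * h     # chain rule into the output weight
    dL_dh = dL_dy * dy_dz2 * w2     # error propagated back to the hidden unit
    dh_dz1 = h * (1.0 - h)          # sigmoid derivative at the hidden unit
    dL_dw1 = dL_dh * dh_dz1 * x     # chain rule into the input weight
    return dL_dw1, dL_dw2

# Sanity check: analytic gradients agree with central finite differences.
x, target, w1, w2 = 0.7, 1.0, 0.5, -0.3
g1, g2 = backprop(x, target, w1, w2)
eps = 1e-6
n1 = (loss(x, target, w1 + eps, w2) - loss(x, target, w1 - eps, w2)) / (2 * eps)
n2 = (loss(x, target, w1, w2 + eps) - loss(x, target, w1, w2 - eps)) / (2 * eps)
print(abs(g1 - n1) < 1e-8, abs(g2 - n2) < 1e-8)
```

In a real multi-layer network the same backward sweep runs layer by layer, reusing each layer's propagated error term, which is what made training deep(er) networks tractable.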
- Advances in machine learning, neural networks, and computational power rekindle interest in AI. New optimism emerges, leading to a second wave of development in the 1990s.