AI Timeline

  • Alan Turing

    published "Computing Machinery and Intelligence," introducing the Turing test and opening the doors to what would be known as AI.
  • Marvin Minsky and Dean Edmonds

    developed the first artificial neural network (ANN) called SNARC using 3,000 vacuum tubes to simulate a network of 40 neurons.
  • Arthur Samuel

developed the Samuel Checkers-Playing Program, the world's first self-learning game-playing program.
  • John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon

    coined the term artificial intelligence in a proposal for a workshop widely recognized as a founding event in the AI field.
  • Frank Rosenblatt and John McCarthy

    Frank Rosenblatt developed the perceptron, an early ANN that could learn from data and became the foundation for modern neural networks. John McCarthy developed the programming language Lisp, which was quickly adopted by the AI industry and gained enormous popularity among developers.
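    Rosenblatt's learning rule is simple enough to sketch in a few lines. The following is a minimal illustration of a perceptron trained on a toy linearly separable dataset (an invented example, not Rosenblatt's original implementation), using NumPy:

```python
import numpy as np

# Minimal perceptron sketch: learn a linear decision boundary
# from labeled data (labels are +1 or -1).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # toy inputs
y = np.array([-1, -1, -1, 1])                                   # AND-like labels

w = np.zeros(X.shape[1])  # weights
b = 0.0                   # bias
lr = 0.1                  # learning rate

for _ in range(20):                        # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (np.dot(w, xi) + b) <= 0:  # misclassified point
            w += lr * yi * xi              # nudge weights toward the correct side
            b += lr * yi

print(np.sign(X @ w + b))  # predictions after training: [-1, -1, -1, 1]
```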
  • Arthur Samuel and Oliver Selfridge

    Arthur Samuel coined the term machine learning in a seminal paper explaining that the computer could be programmed to outplay its programmer. Oliver Selfridge published "Pandemonium: A Paradigm for Learning," a landmark contribution to machine learning that described a model that could adaptively improve itself to find patterns in events.
  • Daniel Bobrow

    developed STUDENT, an early natural language processing (NLP) program designed to solve algebra word problems, while he was a doctoral candidate at MIT.
  • Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg and Carl Djerassi

    developed the first expert system, Dendral, which assisted organic chemists in identifying unknown organic molecules.
  • Joseph Weizenbaum and Stanford Research Institute

    Joseph Weizenbaum created Eliza, one of the more celebrated computer programs of all time, capable of engaging in conversations with humans and making them believe the software had humanlike emotions. Stanford Research Institute developed Shakey, the world's first mobile intelligent robot that combined AI, computer vision, navigation and NLP. It's the grandfather of self-driving cars and drones.
  • Terry Winograd

    created SHRDLU, the first multimodal AI that could manipulate and reason about a world of blocks according to instructions from a user.
  • Arthur Bryson and Yu-Chi Ho, Marvin Minsky and Seymour Papert

    Arthur Bryson and Yu-Chi Ho described a backpropagation learning algorithm to enable multilayer ANNs, an advancement over the perceptron and a foundation for deep learning. Marvin Minsky and Seymour Papert published the book Perceptrons, which described the limitations of simple neural networks and caused neural network research to decline and symbolic AI research to thrive.
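    As a rough sketch of the idea (not Bryson and Ho's original formulation), backpropagation applies the chain rule to push an output error back through each layer of a multilayer network. The toy two-layer example below assumes NumPy, a single training pair and a squared-error loss:

```python
import numpy as np

# Toy two-layer network trained on one (x, target) pair to illustrate
# backpropagation: forward pass, then chain-rule gradients pushed back
# through each layer.
rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])           # input
t = np.array([1.0])                 # target output

W1 = rng.normal(size=(3, 2)) * 0.5  # hidden-layer weights
W2 = rng.normal(size=(1, 3)) * 0.5  # output-layer weights
lr = 0.1

for _ in range(100):
    # forward pass
    h = np.tanh(W1 @ x)             # hidden activations
    yhat = W2 @ h                   # network output

    # backward pass (chain rule)
    d_out = yhat - t                      # dLoss/dyhat for squared error
    dW2 = np.outer(d_out, h)              # gradient for output layer
    d_h = (W2.T @ d_out) * (1 - h ** 2)   # error propagated through tanh
    dW1 = np.outer(d_h, x)                # gradient for hidden layer

    # gradient descent step
    W2 -= lr * dW2
    W1 -= lr * dW1

print(float(W2 @ np.tanh(W1 @ x)))  # approaches the target 1.0
```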
  • James Lighthill

    released the report "Artificial Intelligence: A General Survey," which caused the British government to significantly reduce support for AI research.
  • 1980

    Symbolics Lisp machines were commercialized, signaling an AI renaissance. Years later, the Lisp machine market collapsed.
  • Danny Hillis

    designed parallel computers for AI and other computational tasks, an architecture similar to modern GPUs.
  • Marvin Minsky and Roger Schank

    coined the term AI winter at a meeting of the Association for the Advancement of Artificial Intelligence, warning the business community that AI hype would lead to disappointment and the collapse of the industry, which happened three years later.
  • Judea Pearl

    introduced Bayesian networks for causal analysis, which provide statistical techniques for representing uncertainty in computers.
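    A minimal illustration of the kind of uncertainty reasoning a Bayesian network supports, using an invented two-node rain-and-wet-grass example (the probabilities are assumptions, not from Pearl's work):

```python
# Tiny two-node Bayesian network: Rain -> WetGrass.
# All probabilities here are illustrative assumptions.
p_rain = 0.2                  # prior P(Rain)
p_wet_given_rain = 0.9        # P(Wet | Rain)
p_wet_given_no_rain = 0.1     # P(Wet | not Rain)

# Marginal probability of wet grass (law of total probability).
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Posterior belief in rain after observing wet grass (Bayes' rule).
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

print(round(p_rain_given_wet, 3))  # ~0.692
```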
  • Peter Brown

    et al. published "A Statistical Approach to Language Translation," paving the way for one of the more widely studied machine translation methods.
  • Yann LeCun, Yoshua Bengio and Patrick Haffner

    demonstrated how convolutional neural networks (CNNs) can be used to recognize handwritten characters, showing that neural networks could be applied to real-world problems.
  • Sepp Hochreiter and Jürgen Schmidhuber

    proposed the Long Short-Term Memory recurrent neural network, which could process entire sequences of data such as speech or video.
  • University of Montreal

    researchers published "A Neural Probabilistic Language Model," which suggested a method to model language using feedforward neural networks.
  • Fei-Fei Li

    Fei-Fei Li started working on the ImageNet visual database, introduced in 2009, which became a catalyst for the AI boom and the basis of an annual competition for image recognition algorithms.
  • Rajat Raina, Anand Madhavan and Andrew Ng

    published "Large-Scale Deep Unsupervised Learning Using Graphics Processors," presenting the idea of using GPUs to train large neural networks.
  • Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier and Jonathan Masci

    developed the first CNN to achieve "superhuman" performance by winning the German Traffic Sign Recognition competition.
  • Geoffrey Hinton, Ilya Sutskever and Alex Krizhevsky

    introduced a deep CNN architecture that won the ImageNet challenge and triggered the explosion of deep learning research and implementation.
  • 2013

    China's Tianhe-2 doubled the world's top supercomputing speed at 33.86 petaflops, retaining the title of the world's fastest system for the third consecutive time.
  • Ian Goodfellow and colleagues

    Ian Goodfellow and colleagues invented generative adversarial networks, a class of machine learning frameworks used to generate photos, transform images and create deepfakes. Diederik Kingma and Max Welling introduced variational autoencoders to generate images, videos and text. Facebook developed the deep learning facial recognition system DeepFace, which identifies human faces in digital images with near-human accuracy.
  • DeepMind's AlphaGo

    defeated top Go player Lee Sedol in Seoul, South Korea, drawing comparisons to the Kasparov chess match with Deep Blue nearly 20 years earlier. Uber started a self-driving car pilot program in Pittsburgh for a select group of users.
  • Stanford researchers

    published work on diffusion models in the paper "Deep Unsupervised Learning Using Nonequilibrium Thermodynamics." The technique provides a way to reverse-engineer the process of adding noise to a final image.
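    The forward half of that process, gradually corrupting an image with Gaussian noise, is easy to sketch. The snippet below is a toy illustration of the noising steps only (the noise schedule is an assumption), not the paper's full model:

```python
import numpy as np

# Toy forward diffusion: repeatedly mix an "image" with Gaussian noise.
# A generative diffusion model learns to reverse these steps.
rng = np.random.default_rng(0)
x = rng.uniform(size=(8, 8))    # stand-in for a clean image
beta = 0.05                     # noise added per step (assumed schedule)

states = [x]
for _ in range(50):
    noise = rng.normal(size=x.shape)
    x = np.sqrt(1 - beta) * x + np.sqrt(beta) * noise  # one noising step
    states.append(x)

# After enough steps the signal is essentially pure noise; training teaches
# a network to undo each step in reverse order to generate new images.
print(states[0].std(), states[-1].std())
```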
  • Google researchers

    developed the concept of transformers in the seminal paper "Attention Is All You Need," inspiring subsequent research into tools that could automatically parse unlabeled text into large language models (LLMs).
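    The core operation of a transformer, scaled dot-product attention, can be sketched in a few lines. This is a minimal NumPy illustration of the mechanism, not a full model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                    # weighted sum of values

# Toy sequence of 4 tokens with 8-dimensional embeddings (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```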
  • 2018

    Developed by IBM, Airbus and the German Aerospace Center DLR, Cimon was the first robot sent into space to assist astronauts. OpenAI released GPT (Generative Pre-trained Transformer), paving the way for subsequent LLMs. Groove X unveiled a home mini-robot called Lovot that could sense and affect mood changes in humans.
  • 2019

    Microsoft launched the Turing Natural Language Generation generative language model with 17 billion parameters. Google AI and Langone Medical Center's deep learning algorithm outperformed radiologists in detecting potential lung cancers.