Artificial Intelligence History
Automation is the story of humanity. At first, it meant the use of animals and slaves and the domestication of plants. As civilization grew, we learned to build better tools that required less labor while producing more. Today, computers and machines run our lives: they mine our ore, grow our crops and manufacture our products. People still work, of course, but fewer of us are needed to provide for everyone.
Artificial Intelligence (AI) is a new kind of automation. While the industrial revolution automated manual labor such as factory work, AI is automating our thinking, whether it is YouTube learning which videos to show you or a deep neural network diagnosing cancer in a patient. AI is here and it’s learning fast.
In this article, we will review the history of AI from its birth to today. In particular, we will look at the famous AI winters, periods when AI research slowed and funding all but disappeared.
The beginning of AI (1950s)
The idea of an artificial intelligence can be traced back to the ancient Greeks and the mythical giant Talos, a bronze automaton said to guard the island of Crete from invaders. Unsurprisingly though, humanity had to wait for the invention of computers before such dreams became possible.
The exact beginning of AI is hard to pinpoint. In 1950, Alan Turing famously considered the possibility of thinking machines. The decade also saw Marvin Minsky build one of the first artificial neural network machines and the first game-playing AIs emerge, playing checkers. One of the first government-funded experiments came in 1954, when the Georgetown-IBM experiment automatically translated more than 60 Russian sentences into English. Finally, in 1956, the Dartmouth Conference formally gave birth to AI as a field.
At this point the hype train was moving fast. The expectation was that a fully intelligent machine would be developed within a few decades. From 1963 until the 1970s, MIT received millions of dollars in research grants. In 1964, work began on ELIZA, the first chatterbot, able to carry out simple conversations by matching the user’s input against scripted patterns and canned responses. In Japan, work on the first humanoid robot began in 1967. Hopes were high for this new technology.
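To make the idea concrete, here is a minimal sketch of ELIZA-style pattern matching in Python. The rules and replies are invented for illustration; Weizenbaum’s actual script was far richer, but the principle is the same: match the input against a pattern, then fill in a canned response template.

```python
import re

# A few illustrative rules (not Weizenbaum's originals): each pairs a
# regular expression with a response template reusing the captured text.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me more about your family."),
]
FALLBACK = "Please, go on."

def reply(user_input: str) -> str:
    """Return the first matching scripted response, or a fallback."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(reply("I feel lonely"))     # -> Why do you feel lonely?
print(reply("My mother called"))  # -> Tell me more about your family.
```

Primitive as this is, users in the 1960s were famously convinced that ELIZA understood them.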
The AI winters (late 1960s to early 1990s)
In 1966, ALPAC published its infamous report. The committee found that machine translation was still so primitive as to be basically unusable, and that the field was not improving as fast as promised. In 1969, Minsky and Papert’s book Perceptrons highlighted the limits of simple neural networks, and interest in them collapsed. In 1973, the Lighthill report criticized AI research for failing to live up to expectations. Finally, in 1974, the SUR project, which was supposed to deliver a voice command system for American fighter pilots, produced underwhelming results, and DARPA felt duped. As a result, AI funding was drastically cut and many careers ended.
Many more problems loomed over AI, such as the lack of computing power and the underestimated complexity of most tasks. Researchers began to realize that humans find extremely difficult things effortless, and that common sense mattered far more than expected. Reading text, for example, is easy for us but not for computers.
In the 1980s, AI saw a modest comeback. Many so-called expert systems were built, and fundamental neural network techniques such as backpropagation were developed and popularized, as sketched below.
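The sketch below, in Python with NumPy, shows backpropagation in its simplest form: a tiny two-layer network learning XOR, the very function single-layer perceptrons were shown unable to represent. It is an illustrative toy, not any historical code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule through each sigmoid layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates (learning rate 0.5)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The key insight, which ended the neural network drought of the 1970s, is that adding a hidden layer and propagating errors backwards through it lets the network learn functions a single layer never could.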
However, in 1987, the LISP machine market collapsed. LISP machines had been the preferred hardware for running AI software, but computers from IBM and Apple had become powerful enough to compete with them. Almost overnight, a billion-dollar market was abandoned, destabilizing the AI landscape. At the same time, governments and investors began to fear that the comeback of AI was just an illusion. Fear and instability led to the second phase of the AI winter. Once again funding was cut, but the research continued.
By the early 1990s, AI’s reputation was tarnished, and investors and businessmen were unwilling to take another risk. By 1993, over 300 AI companies had failed, and while research was actually going well, it would take a major public success for AI to recover.
The comeback of AI (1997)
1997 was arguably the last year of the AI winter: IBM’s Deep Blue beat the world chess champion, Garry Kasparov. Computers in general were getting much more powerful and the internet was growing fast. In 1999, Sony launched an AI robot pet named Aibo. In 2002, the well-known robotic vacuum cleaner Roomba was mass-produced. Siri arrived in 2011, Alexa in 2014, and in 2017 the world Go champion lost to a computer. The winter was over and AI was finally taken seriously again. 2011 in particular saw a major revolution, with deep artificial neural networks achieving superhuman performance on some image recognition tasks. For example, Google has developed an AI capable of detecting breast cancer more accurately than doctors.
There are many reasons for the recent growth of AI. We produce more data than ever before and computers have become extremely powerful. Moreover, new neural architectures and training schemes have been developed. The internet now also plays a big role in the dissemination of research and data.
The future of AI is now, and companies are starting to trust AI again. While we clearly cannot predict when we will be able to create an AI smarter than ourselves, I believe we will. But doing so will require us to understand ourselves better first.