
The history of artificial intelligence goes back more than 80 years and, to understand how we got here, we went back in time and built a chronology of seven technologies that brought us to where we are today.
The Turing Machine (1936)
It can be considered the precursor of artificial intelligence. Conceived by British mathematician Alan Turing, it was an abstract machine that, already in the 1930s, could run algorithms and solve computing problems, long before the Internet was a reality. It could not yet power social networks, but it did allow humanity to tackle pressing problems, including those posed by the world wars. As portrayed in the film “The Imitation Game”, it was this work that gave Turing a key role in World War II, as part of the team that developed a solution capable of deciphering Enigma, the machine the Nazis used to send encrypted messages.
The Logic Theorist (1955)
Created a year before the term “artificial intelligence” was officially coined at a conference at Dartmouth College, in the USA, this is considered the first AI program. It was developed by Herbert A. Simon (later a Nobel laureate in Economics), Allen Newell (an expert in psychology and cognition) and Cliff Shaw, a programmer at the RAND Corporation. The idea behind this “software” was to automate human logic in order to prove mathematical theorems. With it, the team managed to prove 38 of the first 52 theorems of Principia Mathematica, a foundational work on the fundamentals of mathematics, in some cases even finding simpler ways to demonstrate them. This was Newell and Simon’s first collaboration, and they went on to form an enduring partnership that produced some of the most important early applications of artificial intelligence.
ELIZA (1966)
It was one of the first chatbots to create the illusion of a conversation, almost 60 years before ChatGPT. Created by Joseph Weizenbaum, it used a rudimentary natural language processing technique that ran conversation scripts prepared for a given context. The script that gained the most popularity was DOCTOR, which simulated a conversation with a psychotherapist, reusing parts of the “patients’” own inputs to ask them questions, giving an impression of communication, but not of understanding (although early users often thought otherwise). ELIZA learned nothing from the interaction, and any change to its responses had to be edited in the program code itself, which is nonetheless considered one of the most important in the history of computing.
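To make that mechanism concrete, here is a minimal sketch, in Python, of the kind of pattern-and-reflection rule a DOCTOR-style script relies on. The patterns, pronoun reflections and responses below are illustrative assumptions, not Weizenbaum’s original script.

```python
import re

# Illustrative pronoun "reflections": echo the user's words back from the
# program's point of view ("my" becomes "your", and so on).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Illustrative script rules: a regex that captures part of the input and a
# response template that reuses the captured fragment as a question.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)",   re.I),   "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default reply when no rule matches

print(respond("I need a holiday"))   # -> Why do you need a holiday?
print(respond("My mother worries"))  # -> Tell me more about your mother worries.
```

Nothing here is learned from the conversation: changing the program’s behaviour means editing rules like these directly in the code, exactly as described above.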
MyCIN (1976)
Returning to the topic of health, this was one of the first computer systems to use artificial intelligence to simulate the knowledge and decisions of a human specialist. In its development, Edward Shortliffe used a relatively “simple” process: first he defined around 600 basic medical rules; then he developed a series of Yes/No questions to be answered; finally, he inserted them into a program that, based on the combination of responses, identified the bacteria behind the infection in order of probability, adding not only a diagnostic rationale but also a treatment proposal adjusted to the patient’s body weight. MyCIN ended up never being used in practice, but not because it failed to give reliable results. At the time, not only were there ethical doubts about the use of computers in Medicine, but there was also no technological infrastructure to facilitate its integration. Today, as we know, the situation is quite different.
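As a rough illustration of how a rule-based expert system of this kind combines answers into a ranked diagnosis, here is a minimal sketch. The questions, organisms and certainty values are invented for the example and are not taken from MyCIN’s actual knowledge base of roughly 600 rules.

```python
# Minimal sketch of a rule-based diagnostic system in the spirit of MyCIN.
# Each rule: (set of required Yes answers, organism it points to, certainty weight).
RULES = [
    ({"gram_negative", "rod_shaped", "anaerobic"}, "bacteroides", 0.7),
    ({"gram_positive", "chains"},                  "streptococcus", 0.6),
    ({"gram_negative", "rod_shaped"},              "e_coli", 0.4),
]

def diagnose(answers: dict[str, bool]) -> list[tuple[str, float]]:
    """Combine Yes/No answers through the rules and rank candidate organisms."""
    scores: dict[str, float] = {}
    for required, organism, weight in RULES:
        if all(answers.get(question, False) for question in required):
            # Accumulate evidence, capped at 1.0 to keep it a rough certainty.
            scores[organism] = min(1.0, scores.get(organism, 0.0) + weight)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

answers = {"gram_negative": True, "rod_shaped": True, "anaerobic": True}
for organism, certainty in diagnose(answers):
    print(f"{organism}: certainty {certainty:.1f}")
# bacteroides: certainty 0.7
# e_coli: certainty 0.4
```

The output is an ordered list of candidates with a crude certainty score, mirroring the “diagnosis in order of probability” described above, though the real system used far richer certainty factors and explanations.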
Deep Blue (1997)
One of the first victories of Artificial Intelligence over Man. Deep Blue was a program developed on an IBM supercomputer, capable of playing against the leading chess grandmasters. In 1996 came the first widely publicized match against one of the best players ever, Garry Kasparov. Deep Blue lost the six-game match, but became the first AI to win a game against a reigning world champion under regular time controls. The following year there was a rematch and, after a technological “upgrade”, Deep Blue became the first program to defeat a world chess champion in a full match. The Deep Blue vs Kasparov confrontation has since been the subject of several books and documentaries which analyze in detail the development of the technology and some of its controversies, among them the question of whether certain moves had really been made by the machine alone.
Watson (2011)
Fourteen years after Deep Blue, this was another IBM project to gain prominence. Watson was able to process and respond to questions posed in natural language without needing to follow a pre-defined script, as we saw with ELIZA a few decades earlier. One of its main case studies was winning a competition on “Jeopardy!”, the popular American quiz show which inverts the usual question-and-answer format: contestants are given clues and have to respond with the matching question. The technology ended up being applied by IBM in various industries, from healthcare to digital marketing, and the Watson brand became an important part of the company’s B2B offering. More recently, IBM announced its new AI platform, WatsonX, which will facilitate the development of generative AI models for its customers.
AlphaGo (2016)
From the beginning of the 2010s, new developments began to emerge, namely the research carried out by Geoff Hinton (“the Godfather of AI”) in the field of Deep Learning. The aim was to introduce elements of machine learning in which the algorithm learns tasks by identifying patterns in data and inferring actions from them, rather than having those actions programmed. It was this rationale that DeepMind (meanwhile bought by Google) used to develop AlphaGo, a program built to rival the best players of Go, a popular strategy game of Chinese origin. Simply put, instead of having every strategy hard-coded and applied to different contexts, AlphaGo uses a machine learning algorithm that identifies patterns in the game and learns how to play well without that knowledge ever being explicitly programmed. DeepMind has since developed MuZero, a more general program that can learn games without even being told their rules.
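As a heavily simplified illustration of that shift, the toy sketch below fits the weights of a position-evaluation function from example outcomes instead of hand-coding them. The features, positions and labels are invented; the real AlphaGo relies on deep neural networks and Monte Carlo tree search, not on this perceptron-style update.

```python
import random

# Toy illustration of learning an evaluation function from data rather than
# writing it by hand. Positions are reduced to three made-up numeric features.

def evaluate(position: list[float], weights: list[float]) -> float:
    """Score a position as a weighted sum of its features."""
    return sum(feature * weight for feature, weight in zip(position, weights))

# Hand-coded approach: an expert guesses the weights.
hand_coded_weights = [1.0, -0.5, 0.3]

# Learned approach: start from random weights and nudge them so that positions
# which led to wins (label +1) score higher than positions which led to losses (-1).
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(3)]
training_data = [
    ([0.9, 0.1, 0.4], +1),   # features of a position that eventually won
    ([0.2, 0.8, 0.1], -1),   # features of a position that eventually lost
    ([0.7, 0.3, 0.5], +1),
]
learning_rate = 0.1
for _ in range(100):
    for position, outcome in training_data:
        error = outcome - evaluate(position, weights)
        weights = [w + learning_rate * error * f for w, f in zip(weights, position)]

print("learned weights:", [round(w, 2) for w in weights])
```

The point of the toy is only the contrast: in the first case a human encodes the judgement, in the second the program derives it from examples of play, which is the spirit of the pattern-learning approach described above.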