Definition Of Artificial Intelligence
Let’s start by immediately dashing the hopes of anyone looking for a single, universally shared definition of the term “artificial intelligence”: it is a concept that spans a very large number of topics across different disciplines, from computer science to neurology, neurobiology and neurophysiology (and, in general, all the disciplines that study the human brain), to mathematics and beyond.
Therefore, the more one tries to give it an all-encompassing scientific definition (one that does not neglect aspects considered fundamental from some point of view), the more one is forced to simplify it. A plain working definition remains: the discipline that studies the design, development and implementation of systems capable of simulating human skills, reasoning and behaviour.
To understand its fields of action, then, it is necessary to look at the history of what only in recent years has come to be considered a genuine scientific discipline: when it first appeared and where it comes from.
When Was Artificial Intelligence Born And How It Evolved
In this article we trace the history of AI, summarising its main evolutionary milestones in schematic form and, since we are talking about systems that simulate human behaviour, using human life as a metaphor for the story.
The Birth Of Artificial Intelligence
- The first sparks that would lead to the birth of this discipline were lit in the century that revolutionised science and inaugurated the modern scientific era, the 1600s (although the beginning of the scientific revolution is conventionally dated to 1543, with the publication of Copernicus’s De revolutionibus orbium coelestium). It was in the 17th century that the first machines capable of performing automatic calculations were built (Blaise Pascal, Gottfried Wilhelm von Leibniz).
- Charles Babbage, with his “analytical engine”, anticipated the characteristics of modern computers in the first half of the nineteenth century; together with the work of Alan Turing, considered the father of computer science, from the second half of the 1930s onwards, this represents the amniotic fluid in which the gestation of artificial intelligence continued.
- It was in 1943, however, with the work of the neurophysiologist Warren Sturgis McCulloch and the mathematician Walter Harry Pitts, that this gestation drew to a close. Based on neurophysiological observation, the two scientists theorised that signalling between two nerve cells is all-or-nothing: the transmission of the nerve impulse can only be complete or null (on/off). Assimilating the neuron to a binary unit, McCulloch and Pitts then showed, with a mathematical model, how simple neurons can be combined to compute the three elementary logical operations NOT, AND and OR. It is from these premises that artificial neural networks would be born, and that science would come to give birth to artificial intelligence.
- The term “artificial intelligence” has a precise date of birth: it was used for the first time by the mathematicians and computer scientists John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon in a 1955 informal document proposing the Dartmouth conference, which was held the following year and is considered the real “delivery room” of artificial intelligence: it was there that the first program explicitly designed to imitate the problem-solving skills of human beings was presented.
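The McCulloch–Pitts idea described above can be sketched in a few lines of code. This is an illustrative model, not the scientists’ original notation: a unit fires (outputs 1) only if the weighted sum of its binary inputs reaches a threshold, and NOT, AND and OR fall out of simple choices of weights and thresholds.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fires (1) iff the weighted input sum
    reaches the threshold, otherwise stays silent (0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: both excitatory inputs must be on to reach threshold 2
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
# OR: a single active input already reaches threshold 1
OR = lambda a, b: mp_neuron([a, b], [1, 1], 1)
# NOT: an inhibitory (negative) weight keeps the unit below
# threshold 0 whenever the input is active
NOT = lambda a: mp_neuron([a], [-1], 0)
```

Combining such units is how the 1943 paper argued that networks of binary neurons can realise any logical expression.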
The Evolution Of Artificial Intelligence
The period from 1950 to 1965 was one of great expectations and, like watching a child begin to talk or walk, the first activities around the newborn discipline were exciting.
- In 1950 Turing proposed his famous “imitation game” in the equally renowned article Computing Machinery and Intelligence, in an attempt to answer the fundamental question: can machines think?
- In 1958 the psychologist Frank Rosenblatt proposed the first neural-network scheme, called the Perceptron, designed for the recognition and classification of shapes: a unit with inputs, an output and a learning rule based on error minimisation.
- In 1958 McCarthy developed the Lisp language to study the computability of recursive functions on symbolic expressions; for a long time it was the reference language in artificial-intelligence projects (and it was the first language to adopt the concepts of the virtual machine and virtual memory management).
- McCarthy also described an ideal program, Advice Taker, designed to find solutions to problems of a not strictly mathematical kind.
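Rosenblatt’s error-minimising learning rule, mentioned above for the Perceptron, can be sketched as follows. This is a minimal modern rendering, not Rosenblatt’s original hardware design: after each prediction, the weights are nudged in proportion to the error (true label minus predicted label), which provably converges when the classes are linearly separable.

```python
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Train a single perceptron on 2-D binary-labelled data using
    Rosenblatt's rule: w += lr * (y - prediction) * x."""
    w = [0.0, 0.0]  # weights, one per input feature
    b = 0.0         # bias (threshold, learned)
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = y - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

# Example: learning the (linearly separable) logical OR function
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]
w, b = train_perceptron(points, targets)
```

A single perceptron can only separate classes with a straight line, a limitation famously highlighted by Minsky and Papert and one reason early enthusiasm for the model later cooled.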