The current artificial intelligence market, according to the data presented at the beginning of 2020 by the researchers of the Artificial Intelligence 2020 Observatory, is still taking its first steps.
Going into a little more detail, we see that Intelligent Data Processing initiatives constitute the main type of AI project, accounting for 33% of total expenditure. Natural Language Processing and Chatbot/Virtual Assistant development projects follow, which together represent 28% of the market. As regards the sectors, four have emerged that stand out more than the others.
Since the 1980s we have been experiencing a period of great ferment around artificial intelligence, one we can compare to a human being in the prime of youth: full of energy and a desire to experiment, but also equipped with the ability to analyse what has been learned and to evaluate its potential and possible developments. And here our metaphor ends, because the next step, that of maturity, is still on the horizon. Here, then, are the key junctions of the last 40 years, divided by strand.
Neural Networks
We will analyse this topic in detail later, when we talk about deep learning, but here are some basic historical notes.
We saw in the previous paragraph that the first theorisation of artificial neural networks (the Rosenblatt perceptron) had been called into question, but in 1986 Jay McClelland and David Rumelhart published Parallel Distributed Processing: Explorations in the Microstructure of Cognition, laying the foundations for connectionism and giving new vigour to studies in this field. A neural network is a directed non-linear graph, in which each processing element (each node of the system) receives signals from other nodes and in turn emits a signal to other nodes.
In practice, while in a conventional computer knowledge is concentrated in a precise place (the memory), in a neural network knowledge is not localisable: it is distributed across the connections of the network itself, allowing the system to learn from its own experience.
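To make this concrete, here is a minimal sketch (our own illustration, not from the source) of a single processing element: it receives signals from upstream nodes, weights them over its incoming connections, and emits one non-linear output signal. Note that the node's "knowledge" lives entirely in its connection weights, not in any separate memory location.

```python
import numpy as np

def node_output(incoming_signals, connection_weights, bias=0.0):
    """Weighted sum of incoming signals passed through a non-linearity."""
    activation = np.dot(connection_weights, incoming_signals) + bias
    return np.tanh(activation)  # non-linear transfer function

# Three upstream nodes emit signals; this node combines and re-emits them.
signals = np.array([0.5, -1.2, 0.8])
weights = np.array([0.9, 0.3, -0.5])  # learned, distributed "knowledge"
print(node_output(signals, weights))
```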
The multilayer perceptron (MLP) model was then developed, in which each layer of nodes is fully connected to the next, and a supervised learning technique called error back-propagation (which we will explain later, in the deep learning chapter) is used to train the network, starting from an already known result for a specific sample.
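As a hedged illustration of how such a network learns, the following sketch trains a small MLP with error back-propagation on the classic XOR problem (which a single-layer perceptron cannot solve); the layer sizes, learning rate and epoch count are our own illustrative choices, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised dataset: inputs X with already known results y (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer, fully connected to input and output (an MLP).
W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # Forward pass: each layer feeds the next.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers.
    err_out = (out - y) * out * (1 - out)     # delta at output layer
    err_hid = (err_out @ W2.T) * h * (1 - h)  # delta at hidden layer

    # Gradient step on every connection weight.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0, keepdims=True)

print(out.round(2))  # approaches [[0], [1], [1], [0]] with this initialisation
```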
Nanotechnologies
The evolution of nanotechnologies, i.e. the ever-increasing miniaturisation of microprocessors, has led to the development of a new generation of components (from general-purpose GPUs to resistive RAMs to neuromorphic chips) that have given new impetus to artificial intelligence thanks to the enormous computing power they make available. We will look at them in detail later, in the chapter The technologies that enable and support artificial intelligence.
Innovative Algorithms
If the previous strands represent (to close the parallel with our metaphor) the physicality of artificial intelligence, the algorithms represent what comes closest to thought: that "magical" faculty that makes man a very special being compared to other animals.
It is precisely from the evolution of innovative methodologies and algorithms that artificial intelligence draws its lifeblood, and it is on these that we will focus in the parts of this article dedicated to machine learning, deep learning, natural language processing, and augmented and virtual reality.
Cognitive Computing
Using self-learning algorithms, data mining and big data analytics, pattern recognition, natural language processing and signal processing, and by implementing the most advanced hardware technologies, technological platforms are created that try to imitate the human brain, starting from simpler activities and arriving at increasingly complex processing. (A signal is a temporal variation of the physical state of a system, or of a physical quantity, used to represent and transmit messages or information at a distance; the analysis of signals is therefore a component that supports cognitive computing.)
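As a small illustration of the signal-processing component just mentioned (our own sketch, using a synthetic signal rather than any real cognitive-computing pipeline), the following code recovers the dominant frequencies of a noisy time-domain signal with a Fourier transform:

```python
import numpy as np

fs = 1000                    # sampling rate in Hz (illustrative choice)
t = np.arange(0, 1, 1 / fs)  # one second of samples

# A synthetic signal: 50 Hz and 120 Hz components buried in noise.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
signal += 0.3 * np.random.default_rng(0).normal(size=t.size)

# Spectral analysis: magnitude of the real-input Fourier transform.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# The two strongest frequencies recovered from the noisy signal.
top = freqs[np.argsort(spectrum)[-2:]]
print(sorted(top))  # approximately [50.0, 120.0]
```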
Several examples are available today, from the historic IBM Watson, the first commercial supercomputer of this type, to Google DeepMind and Baidu's Minwa.