Artificial Intelligence is now firmly on the agenda. Its application areas are extremely varied, ranging from support for companies and industries in general, through retail, warehouse and inventory optimization, supply, and logistics, down to more frivolous uses such as deepfakes. The AI models being developed tend to be as “simple and intuitive” at a conceptual level as they are complex in their implementation, which is why their results are very often accepted by users sight unseen, without any explanation of how the model arrives at them.
Under these conditions, however, it is not so easy to spread a culture of data: decisions made by a model whose inner workings the user cannot see risk not being taken seriously and being considered unreliable. This is one of the most significant difficulties encountered in AI projects, and it is the reason we talk more and more often about eXplainable AI (XAI): “explainable” artificial intelligence.
What Is Meant by Explainable AI?
Explainable AI (XAI) is a set of methods and processes that allow users to confidently understand and use the results produced by machine learning algorithms. The purpose of XAI is to give the user an understanding, albeit at a high level, of what happens inside the model: to make the user a participant in the transformations the data undergoes and in the analyses applied to it.
Ultimately, the goal is to let users develop greater trust in their data and in AI-based solutions. The qualities that distinguish eXplainable AI are mainly three: transparency, interpretability, and explainability. We speak of transparency when the analytical training processes of the model can be described unambiguously and when the choice of a specific algorithm or parameter can be justified without ambiguity. Interpretability and explainability are closely related concepts that describe different moments in how the model is framed and presented to the user.
Interpretability concerns presenting the model and the calculated result in a form that is understandable to humans. Explainability represents the next step: the set of characteristics of the application domain that make the model interpretable within a specific context and that therefore contribute to creating added value from a decision taken in a data-driven way.
How XAI Works
Technically speaking, every model reads input data, potentially coming from the most diverse sources, processes it with statistical and analytical techniques that can be very complex and advanced, and finally produces its results. The strength of XAI lies in making these analytical steps “explainable”: providing a logical account of how the machine learning model arrived at a specific result, and therefore relating the input data to the output data. Making this explainability concrete without trivializing the technological and computational complexity of the models is a difficult step. It requires specific skills and tools to support both the Data Scientists who develop the model and the users who consume the results and need to understand them.
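As a concrete, minimal illustration of relating inputs to outputs, the sketch below uses permutation importance from scikit-learn on synthetic data (the dataset and feature names are purely illustrative): each input feature is shuffled in turn, and the drop in model accuracy indicates how strongly the output depends on that input.

```python
# Minimal sketch: relating input features to model output with
# permutation importance (scikit-learn). Data and names are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for real business inputs.
X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box" model: accurate, but opaque to the end user.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the score drops:
# a large drop means the output strongly depends on that input.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance drop = {score:.3f}")
```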
Many methods are used in XAI (for example, LIME and SHAP), but without going into strictly technical detail, we can speak of two different design approaches. In the first, once the development of the analytical AI model has been completed, a “simplified” version of it is created, with the same data transformation characteristics but a much coarser degree of approximation. The accuracy and reliability validated on the original model no longer hold for this version, but it is much easier to explain to the user. For this reason, the simplified version is never used in the actual analysis and decision-making process: it is simply a more straightforward, more understandable representation of the model, used to explain the analytical and decision-making process to the non-technical end user.
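A minimal sketch of this surrogate-model idea, assuming a scikit-learn setup with synthetic data (the models and parameters here are illustrative, not a prescribed recipe): a shallow decision tree is trained to mimic the predictions of the complex model, and its fidelity to the original is measured so the degree of approximation stays explicit.

```python
# Illustrative sketch of a global surrogate model: a shallow, readable
# tree is fitted to the *predictions* of the complex "black box" model,
# so it approximates the model, not the original problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The original, hard-to-explain model used for real decisions.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate learns to reproduce the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate))  # human-readable decision rules
```

The max_depth parameter controls the trade-off: a deeper tree is more faithful to the black box but harder to read, which mirrors the accuracy-versus-explainability tension described above.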
An alternative approach focuses instead on the interpretability of the AI model itself, that is, on making it understandable to the user without intermediate steps. To do this, specific tools can be used that make the process “modular” from the moment it is set up: the black-box mechanism is applied to individual activities and analysis steps rather than to the project as a whole.
Each step is represented by its own black box, so the user can follow the analytical flow step by step, observing the boxes from above and understanding the purpose of each one without going into the technical detail of how each operation is physically performed. A simplified view of the analytical flow is thus obtained simply by observing it from a higher vantage point. For this approach, Bova has chosen the Dataiku platform, a powerful graphical Data Science tool whose intuitive, modular interface allows even very complex flows to be developed while remaining understandable to non-technical users.
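Dataiku expresses this modularity graphically; as a rough code analogy (a sketch with illustrative step names, not Dataiku's actual mechanism), the same idea can be conveyed with a named pipeline in which each step is a self-contained box whose purpose is readable from the outside.

```python
# Illustrative sketch of a "modular black box" flow in scikit-learn.
# Each named step is one box: its purpose is clear from the outside
# even if its internals are never inspected.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

flow = Pipeline(steps=[
    ("clean_and_scale", StandardScaler()),
    ("reduce_dimensions", PCA(n_components=4)),
    ("predict_outcome", LogisticRegression()),
])
flow.fit(X, y)

# The high-level view of the flow: step names only, no internals.
for name, step in flow.named_steps.items():
    print(f"{name}: {type(step).__name__}")
```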
Explaining a model to a Data Scientist will always be much easier than explaining it to a business user, because each has specific skills and different interests. To simplify matters, it is good practice to adopt support tools suited to each phase of the project and to the various business functions involved. Graphical interfaces often make even very complex reasoning intuitive, thanks to the visual components that characterize the different operations.
Possible Future Developments
Thanks to eXplainable AI, what until recently were considered the future developments of Machine Learning are becoming reality: self-driving cars, chatbots, and all the solutions created to support human activities where errors are critical (medical diagnostics, military technology, investment, financing, etc.). Developments in this direction remain, in fact, very much focused on the central role of humans and on the help and support that AI can provide to optimize their activities and decisions.
Many ethical and emotional aspects that cannot be modeled mathematically influence final decisions, so XAI will probably never replace humans entirely. Over time, though, it will undoubtedly be of great support in observing the world objectively and in making decisions, even decisions influenced by feelings, with greater awareness of the reality of things.