Artificial Intelligence: Machine, Explain Yourself

Artificial intelligence allegedly gives no insight into what it has learned. In fact, such systems can be interpreted, and doing so is urgently needed.

Artificial intelligence works relatively smoothly in some areas, so more and more people rely on its predictions. In the American healthcare system, for example, a commercial AI is used to identify patients with special needs, who are then enrolled in a program that gives them intensive support. The idea is that this helps patients while lowering costs, because out of roughly 200 million people only those who will particularly benefit are selected. The decisive criterion is the costs these patients have already generated.

But because Black people claim less medical care on average, owing to poorer access to the health system, they fall through this grid: the artificial intelligence learns to overlook them. Researchers led by Ziad Obermeyer of the University of California uncovered this bias. They estimate that if the program were corrected, the share of Black patients among the chronically ill who receive the additional services would rise from just under 20 to around 50 percent.
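The mechanism is easy to reproduce. The following minimal sketch in Python uses purely synthetic data (the group labels, cost model, and program size are invented for illustration and have nothing to do with the vendor's actual system) to show how ranking patients by predicted cost rather than by medical need systematically under-selects a group that spends less while being equally ill.

```python
# Illustrative only: how using healthcare *costs* as a proxy label for
# healthcare *need* disadvantages a group that generates lower costs
# at the same level of illness. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true medical need, same distribution in both groups

# Group B generates ~30% lower costs at equal need (poorer access to care).
access = np.where(group == 1, 0.7, 1.0)
cost = need * access * rng.lognormal(0.0, 0.3, size=n)

top_k = n // 20  # the support program only takes the top 5% by score

def share_of_group_b(score):
    selected = np.argsort(score)[-top_k:]
    return group[selected].mean()

print("Share of group B among the sickest 5% (ranked by true need):",
      round(share_of_group_b(need), 2))
print("Share of group B selected when past cost is the ranking score:",
      round(share_of_group_b(cost), 2))
```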

According to the study, similar biases have been found in AI-based facial recognition systems, hiring processes, and the algorithms behind web searches. Such widely deployed algorithms are mostly proprietary; the companies give no insight into what they let loose on people, the scientists write. That makes it difficult for independent researchers to analyze them: they can document the harms, but they cannot understand how those harms come about. Some AI vendors argue that all that matters is whether the results are good, yet sometimes the flaws are not visible in the results at all. Where AI is used in sensitive areas, legislators require that decisions be explainable or interpretable, but they do not define what that means, which in effect gives companies a free hand.


To Avoid Mistakes, Researchers Need To Know How Artificial Intelligence Works

With the success of deep learning, machine learning models are becoming more complex and harder to explain. Such systems, known as artificial neural networks, comprise hundreds of functions and millions of parameters. They are trained on data, find patterns in it, and can then draw conclusions about similar, previously unseen data. The learning process is considered a black box because such a vast number of computational steps cannot be followed by a human. What happens inside the black box is a mystery, many users say. But that is not entirely true.
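A quick back-of-the-envelope calculation shows why direct inspection is hopeless. Even a deliberately small, fully connected network for color images of 224 by 224 pixels, with layer sizes chosen here purely for illustration, already has tens of millions of weights:

```python
# Rough parameter count for a small fully connected image classifier.
# The layer sizes are illustrative, not taken from any particular model.
layers = [224 * 224 * 3, 512, 512, 10]   # input pixels, two hidden layers, 10 classes

params = sum(n_in * n_out + n_out        # weights plus biases for each layer
             for n_in, n_out in zip(layers, layers[1:]))

print(f"{params:,} trainable parameters")  # about 77 million
```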

“In principle, you can interpret any AI system to a certain extent,” says Christoph Molnar, who researches at the Institute for Statistics at Ludwig Maximilian University of Munich how machine learning can be made interpretable. How urgent that is depends on the application. “An error in a system that gives product recommendations to customers is, of course, less serious than in a system that is supposed to diagnose a tumor or drive a car,” he says. In the latter cases, explainability can be vital. Interpretable or explainable AI has therefore become a research field of its own.

Scientists from the Technical University of Berlin (TU Berlin), the Fraunhofer Heinrich Hertz Institute, and the Singapore University of Technology and Design have developed a technique, Spectral Relevance Analysis (SpRAy), that makes the decision criteria visible, for example in the form of heat maps. It is as if a laser pointer were attached to the AI: by following the beam, you can see where the AI is looking. “What we need for this, however, is insight into the model – we have to know what the structure of the network looks like and what parameters it uses,” says Klaus-Robert Müller from TU Berlin. “Our explanation method works so well because we take all of this into account, and only then can we understand how the network decided.”
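The heat-map idea can be illustrated with a much simpler stand-in. SpRAy itself builds on layer-wise relevance propagation and then clusters the resulting relevance maps; the PyTorch sketch below only computes a plain gradient saliency map for a pretrained classifier, which is enough to show how access to a model's internals lets you visualize which pixels a prediction depends on. The file name `horse.jpg` is a placeholder for any test image.

```python
# NOT SpRAy: a plain gradient saliency map as a minimal illustration of
# pixel-level heat maps, using an off-the-shelf pretrained classifier.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("horse.jpg")).unsqueeze(0)  # placeholder image
img.requires_grad_(True)

score = model(img).max()   # logit of the predicted class
score.backward()           # gradient of that score w.r.t. every input pixel

# One value per pixel: large values mark pixels the prediction depends on.
saliency = img.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)      # torch.Size([224, 224]) – ready to plot as a heat map
```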

The researchers found deficiencies in an AI system that had won several international image classification competitions a few years earlier. It is based on a statistical model, so-called Fisher vectors (FV). In the study, this FV model and a comparison model, an artificial neural network trained specifically for the task, were supposed to decide whether or not a horse could be seen in a photo. The comparison revealed significant discrepancies. The heat map of the neural network showed the horse and rider in reddish colors, so it had examined the content of the image. The heat map of the award-winning FV model, in contrast, lit up in the lower-left corner of the image, where many of the pictures carry a source notice, for example the name of a horse photo archive.

So the AI did not actually identify the horse; it learned to recognize it from the source notices. In a figurative sense, it simply read off the answer, as if from a cheat sheet. This strategy is a clear case of “Clever Hans” behavior, as Müller calls it. Clever Hans was a horse at the beginning of the 20th century whose teacher claimed to have taught it arithmetic, and indeed the horse answered tasks correctly by tapping a hoof or nodding. Scientists eventually found out, however, that the horse was merely picking up the subtlest nuances in the teacher's gestures and facial expressions, so the teacher was inadvertently giving away the solution.

The researchers around Müller were able to confirm their finding by removing the source notice from a horse photo and inserting it into a picture of a sports car: the FV model now recognized a horse in the car. “Such AI systems are completely useless in practice and extremely dangerous in safety-critical areas,” warns Klaus-Robert Müller. No one knows how an AI relying on such shortcut solutions will behave in everyday use. “It is conceivable that up to half of the AI systems currently in use implicitly or explicitly rely on such strategies.”
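This kind of “Clever Hans” probe is easy to run against any image classifier you can query. The sketch below is not the researchers' actual FV pipeline; it is a generic version of the test they describe, which copies the corner of an image carrying a source tag onto an unrelated image. The file names and the size and position of the tag are assumptions to be adapted to the data being audited.

```python
# Generic shortcut probe: transplant the watermark/source-tag corner of one
# image onto another and check whether the classifier's prediction follows
# the tag instead of the image content. File names and tag geometry are
# placeholders.
from PIL import Image

def paste_source_tag(donor_path, target_path, box=(120, 40)):
    """Copy the lower-left corner of `donor` (assumed to contain the source
    tag) onto the same position in `target` and return the modified image."""
    donor = Image.open(donor_path)
    target = Image.open(target_path).copy()
    w, h = box
    tag = donor.crop((0, donor.height - h, w, donor.height))
    target.paste(tag, (0, target.height - h))
    return target

probe = paste_source_tag("horse_with_source_tag.jpg", "sports_car.jpg")
probe.save("sports_car_with_horse_tag.jpg")
# If the model's prediction for this file flips from "car" to "horse",
# it has learned the tag, not the animal.
```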

Müller and his colleagues are not the only ones researching explainable AI; the field is growing. There are now several promising approaches, says Christoph Molnar, for example Shapley Additive Explanations (SHAP). This technique tries to identify which parts of the input data a trained model relies on most. In face recognition, for instance, researchers can remove the eyes or noses from the image data as a test, to determine whether the AI can still recognize the people in the photos.
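The underlying idea carries over to any kind of input. The sketch below is not the face-recognition setting just described and does not use the `shap` library's exact Shapley computation; it is a simplified, hand-rolled attribution on a standard tabular dataset that masks one feature at a time and measures how much the model's confidence drops, which is the same remove-and-compare principle.

```python
# Simplified attribution in the spirit of SHAP / occlusion testing:
# "remove" one input feature at a time and watch the prediction change.
# Dataset and model are just convenient stand-ins.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

sample = data.data[:1]
baseline = model.predict_proba(sample)[0, 1]   # confidence for one patient

drops = []
for j in range(data.data.shape[1]):
    masked = sample.copy()
    masked[0, j] = data.data[:, j].mean()      # analogous to masking eyes or nose
    drops.append(baseline - model.predict_proba(masked)[0, 1])

# The features whose removal hurts the prediction most are the ones
# the model relies on.
for j in np.argsort(drops)[-5:][::-1]:
    print(f"{data.feature_names[j]:>25s}: confidence drop {drops[j]:+.3f}")
```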


Is It Enough For Experts To Understand What A System Is Doing? Or Does The User Have To Be Able To Understand That?

Sebastian Palacio from the German Research Center for Artificial Intelligence in Kaiserslautern is working on AI models that are designed from the outset to be easy to interpret. Ultimately, such models are a partial return to expert systems. Today's typical AI models, especially neural networks, do not need to be told what constitutes a tumor on an image; they learn it from examples. If you involve experts and hand the AI a set of known characteristics, its decisions become easier to interpret. The disadvantage is that the AI is then less able to find things that the experts themselves have overlooked. The existing models therefore remain justified, even if their explainability has limits.
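A minimal sketch of that trade-off: a model restricted to a handful of expert-chosen, named characteristics can be read off directly, at the price of never discovering features the experts did not think of. The feature names and data below are invented for illustration.

```python
# Interpretable-by-design sketch: a linear model over named expert features.
# Feature names and data are synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

expert_features = ["lesion diameter (mm)", "border irregularity", "asymmetry score"]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (0.9 * X[:, 0] + 1.4 * X[:, 1] + 0.2 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0     # synthetic "tumor" labels

model = LogisticRegression().fit(X, y)

# Every decision is a weighted sum of known quantities, so it can be read off:
for name, coef in zip(expert_features, model.coef_[0]):
    print(f"{name:>25s}: weight {coef:+.2f}")
```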

One problem, says Palacio, is that there is no basic definition of explainability, neither in law nor in science. The EU General Data Protection Regulation, for example, states that decisions reached by automated procedures must be explainable, but there is no precise definition of what that is supposed to mean. Is it enough for experts to understand what a system is doing? Or does the user have to be able to understand it too?

Last but not least, an AI can also explain its decisions through examples. If an AI that is supposed to recognize image content labels a cat as a house, it can show, during troubleshooting, the example images of houses or cats it relied on; this can reveal how the error came about, for instance because several training images show cats in front of houses. But would that count as explainability? As long as this is not clarified, companies have little incentive to make their AI interpretable. And then a supposedly reliable AI may turn out to recognize skin cancer only when the doctor has marked it with an arrow on the picture – which has actually happened.
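Example-based explanations of this sort can be sketched with nothing more than a nearest-neighbor lookup. The code below is a minimal illustration, not a production technique: for an input treated here as misclassified, it retrieves the most similar training examples in a simple feature space (raw pixels of a standard digits dataset; a real system would use the network's internal representation) so you can see what the model confused it with.

```python
# Minimal example-based explanation: show the training examples an input
# most resembles. Raw pixels stand in for a learned feature space.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import NearestNeighbors

digits = load_digits()
train_X, train_y = digits.data[:-1], digits.target[:-1]
query = digits.data[-1:]                 # pretend this input was misclassified

nn = NearestNeighbors(n_neighbors=5).fit(train_X)
_, idx = nn.kneighbors(query)

print("labels of the most similar training examples:", train_y[idx[0]])
# If these neighbors are dominated by the wrong class, they show
# what led the model astray.
```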
