Artificial intelligence is a megatrend that will significantly shape value creation. For it to develop its full potential, however, questions of transparency, traceability, and security must be answered. It is crucial to define what works under which conditions – and what does not. The AI controller plays a central role in an open interaction between man and machine.
Digitization has long been changing society and has penetrated almost every area of private and working life. When the sudden changes associated with digitization are discussed in this context, one also speaks of megatrends. Within digitization, the megatrend of artificial intelligence (AI) is of central importance: it is expected to massively support value creation in national economies in the near future.
The problems of the German Mittelstand (small and medium-sized enterprises) include:
- Shortage of apprentices and skilled workers,
- Increase in international competition,
- Fluctuating commodity prices,
- Threats to intellectual property,
- Industrial espionage,
- Protecting IT systems against increasing hacker attacks.
According to experts, these challenges can be tackled successfully with AI and new business models, which therefore play a vital role in the performance of economies in the 21st century.
Current surveys suggest that AI will add around 40 billion euros of value to the German economy within five years. The term AI is not easy to pin down, partly because there is no generally accepted definition of intelligence. In general, AI is the ability of an artificial system or machine to imitate or simulate parts of intelligent human behavior. As a subfield of computer science, AI deals with the systematization of and research into such intelligent behavior.
AI modules can have different levels of maturity:
- Simple modules of weak AI provide purely supportive machine functions.
- Automated AI can automatically carry out repetitive activities and, if necessary, codify them.
- Robust AI modules can work autonomously and in a goal-oriented manner.
It is widely believed that we are currently in the transition from phase 1 to phase 2; phase 3, on the other hand, is not expected for a few decades. Humans are increasingly taking on control and monitoring functions over data, AI methods, and their social implications. In many cases, the exact objects and scope of these functions still await clarification.
Risks Of Using AI
Transparency, traceability, and security are central values for any product, service, or machine. Precisely these aspects are not always guaranteed or even comprehensible with AI-based approaches:
- Algorithms can, for example, perpetuate discrimination against population groups that is present in their input data.
- When AI systems are used in vehicles, dubious decisions can arise: Does it make more sense to save the occupants at all costs, or should the number of victims be minimized according to some criteria?
- AI systems could completely replace human workers.
Since the social debate on these aspects is gaining momentum, it is desirable from an entrepreneurial point of view as well to take part in it in a constructive and balanced way.
Conditions For The Use Of AI
Given the ever more rapidly developing possibilities, the focus is on the question of which conditions exist or must be created so that humans can exercise control functions over AI in a way that makes a positive contribution to society as a whole. The following aspects, among others, must be taken into account:
Traceability: One aspect concerns the promotion of traceable criteria in the development and use of AI, which needs greater focus. As early as 2014, Bostrom and Yudkowsky therefore called for the working methods and actions of AI to be fundamentally comprehensible.
Black-box approaches whose model outputs can no longer be understood must be avoided.
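The traceability requirement can be illustrated with a minimal sketch: a rule-based decision that returns not only its output but also the reasons that produced it, so an AI controller can reconstruct every result. The function name, fields, and thresholds below are purely illustrative assumptions, not part of any real system described in the text.

```python
# Minimal sketch of a traceable (non-black-box) decision: every output
# carries the rules that fired, so the AI controller can see exactly
# why a case was assessed the way it was. All names and thresholds
# are illustrative.
def assess_credit_risk(applicant: dict) -> dict:
    reasons = []
    score = 0
    if applicant["income"] < 2000:            # illustrative threshold
        score += 2
        reasons.append("income below 2000")
    if applicant["missed_payments"] > 0:
        score += 3
        reasons.append("missed payments on record")
    decision = "review" if score >= 3 else "approve"
    # The reasons list makes the output comprehensible and contestable.
    return {"decision": decision, "score": score, "reasons": reasons}
```

In contrast to an opaque model, each returned decision here can be challenged point by point, which is exactly what the correction options discussed below require.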
Correction options: If errors occur, they must be presented to human operators/users promptly and in an understandable manner, together with operationalized options for correction. This also includes the demand for essential protection against manipulation of the AI algorithms.
Data steward/AI controller: New professional roles must be established, for example to ensure that relevant data is provided or collected for AI in the respective areas of companies and society. The competencies and responsibilities of the AI controller and the AI itself must be clearly defined. These job profiles include aspects from the field of data science, since certain decisions cannot be made without the necessary specialist knowledge, as well as aspects of compliance and data protection: Should specific AI processes be used at all, and if so, under what conditions? (Keyword: biometric ID procedures such as automated face recognition.)
UX4AI: Desirable user interfaces must be developed that enable more efficient and comprehensive collaboration between AI and human users. The main point here is that the classic graphical user interface (GUI), through which we have interacted with software components for decades, is replaced by more modern interfaces that enable faster interactions with AI software. Much development is currently taking place in conversational user interfaces (CUI). Motivated by the now fast and accurate speech recognition enabled by natural language processing (NLP), this development can already be seen in voice assistants for mobile telephony.
3-Rules: In conjunction with the options just mentioned for using CUI with AI, rules for the fundamental behavior of these AI modules in the new human-machine interaction must be defined. Analogous to Asimov’s laws of robotics, a three-rule proposal is made here that could be included in the certification of ethical AI in the future. The respective AI should have a CUI-based language module that interactively involves the user and the AI controller in its decisions. The following rules could apply:
- Show AI model: When asked, the AI must be able to explain via a CUI what it was created for and what it can and cannot do.
- AI quality assurance: When asked, the AI must be able to explain via a CUI why AI support makes sense at this point. When results are reported, a confidence interval must be specified; if desired, external sources that the AI controller can verify must be referenced for this purpose.
- AI clarity: If the results achieved by the AI are ambiguous, the AI must be able to explain via a CUI what kind of ambiguity it is and actively ask the AI controller what should be done. If the AI operates outside its target area during deployment, it must actively inform the controller that its area of competence has been exceeded.
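The three rules above can be sketched as a minimal programmatic interface. Everything here is a hypothetical illustration under stated assumptions: the class names (`CuiAiModule`, `Controller`), the demand-forecasting purpose, the crude ±10% confidence interval, and the "fewer than three data points" competence boundary are all invented for the sketch, not part of the proposal itself.

```python
# Hypothetical sketch of the three rules as a minimal CUI-style protocol.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Result:
    value: float
    confidence_interval: tuple  # (lower, upper) — rule 2: always reported
    sources: list               # external references the controller can check


class Controller:
    """Stand-in for the human AI controller's message channel."""
    def __init__(self):
        self.messages = []

    def notify(self, msg: str):
        self.messages.append(msg)


class CuiAiModule:
    """An AI module that can explain itself and escalate ambiguity."""

    PURPOSE = "Forecast monthly demand for one product line."
    LIMITS = "Not valid for new products without sales history."

    def show_model(self) -> str:
        # Rule 1: on request, explain what the AI was created for
        # and what it can and cannot do.
        return f"Purpose: {self.PURPOSE} Limits: {self.LIMITS}"

    def quality_assurance(self) -> str:
        # Rule 2 (first part): explain why AI support makes sense here.
        return ("AI support is useful because demand depends on many "
                "interacting factors that manual planning cannot track.")

    def predict(self, history: list) -> Result:
        # Toy estimate: mean of the history with a crude +/-10% interval.
        mean = sum(history) / len(history)
        return Result(mean, (0.9 * mean, 1.1 * mean),
                      sources=["internal sales database (assumed)"])

    def answer(self, history: list, controller: Controller) -> Optional[Result]:
        # Rule 3: if the request lies outside the competence area,
        # actively inform the controller instead of answering silently.
        if len(history) < 3:  # illustrative competence boundary
            controller.notify("Area of competence exceeded: too little "
                              "history. Awaiting controller decision.")
            return None
        return self.predict(history)
```

The key design point is that `answer` never fails silently: a result always carries a confidence interval and sources (rule 2), and an out-of-scope request produces an active notification to the controller rather than an unqualified output (rule 3).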
Outlook On Trust, Security, And Transparency In Current AI Development
The field of AI development is currently evolving at such a speed that trust-building approaches for a secure, understandable, and transparent embedding of its methods in companies and society are increasingly coming into focus. For this to be implemented and the potential of AI to be developed successfully, the boundary conditions of AI systems should always be clear: What works under which conditions, and what does not? So that the user/AI controller can understand this as easily and quickly as possible, CUIs should be used according to specific rules. This approach is intended to ensure that nothing happens outside the target area, i.e., the original purpose of the AI modules, without the consent of the user/AI controller. It would establish a simple process that turns the often-criticized lack of transparency of AI processes into an open interaction between man and machine. After the newly established field of data science, another profession of the future would thus be promoted: that of the AI controller.