Artificial Intelligence Needs Ethical Guard Rails

Artificial intelligence has not yet developed a moral conscience. How could it? Until now, the general guidelines that would need to be implemented in the algorithms have been missing. But something is happening.

Artificial intelligence has experienced an enormous boom in recent years and has, within a short time, developed into a key technology for the future of our society. The European Commission recently presented a draft law that focuses primarily on regulating AI applications that pose risks to humans.

The German federal government likewise wants to use its “Artificial Intelligence Strategy” to make Germany and Europe a leading location for AI and, among other things, to promote the technology’s many possible uses across all areas of society. According to a Gartner survey, 24 percent of the companies surveyed increased their investments in AI even during the pandemic, and 75 percent are planning further or new AI initiatives as they enter the post-pandemic recovery phase.

Categorizing The Risk Potential Of AI

Before using artificial intelligence, however, companies should hold a detailed discussion about the advantages and disadvantages of the technology from an ethical and moral perspective. There are still legitimate concerns: for example, that the underlying data could reflect human prejudices and result in racially biased software or anti-Semitic chatbots. CTOs must be equipped with the understanding and knowledge needed to recognize such risks in AI use cases.

With the new law, the EU Commission wants to introduce rules for the safe and trustworthy use of artificial intelligence. According to Vice President Margrethe Vestager, the focus is therefore not on AI itself but on those applications that carry high-risk potential. Risk is categorized accordingly: most AI systems fall into the “low” or “no” risk category, for example video games or spam filters.

Other use cases, such as selection procedures for university admissions or jobs, credit-scoring systems, or self-driving cars, are classified by the draft law as high-risk applications. Despite the proposed ban on AI for generalized surveillance of the population, warning voices remain, because the draft does not explicitly prohibit facial recognition in public spaces and thus harbors the risk of mass surveillance using biometric AI algorithms.

Equal Opportunities In Lending & The Application Process

The use of AI is already part of everyday life. Loans, for example, are granted on the basis of an AI algorithm, which only works as well as it was programmed. Depending on its data basis, the algorithm may single out areas along ethnic lines or offer different services depending on the residential area. A resident of a socially deprived area, for example, may not be offered certain services at all, may receive them only in lower quality, or may pay more for them than residents of affluent areas.
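As an illustration only, here is a minimal sketch of how such a disparity might be surfaced. The per-applicant decisions, area labels, and flagging threshold are all invented for this example:

```python
from collections import defaultdict

# Hypothetical loan decisions: (residential area, approved?) per applicant.
decisions = [
    ("deprived", False), ("deprived", False), ("deprived", True),
    ("affluent", True), ("affluent", True), ("affluent", False),
]

# Count applications and approvals per area.
totals, approvals = defaultdict(int), defaultdict(int)
for area, approved in decisions:
    totals[area] += 1
    approvals[area] += approved

# Approval rate per area; a large gap between groups is a red flag.
rates = {area: approvals[area] / totals[area] for area in totals}
gap = max(rates.values()) - min(rates.values())

for area, rate in rates.items():
    print(f"{area}: approval rate {rate:.2f}")

# An assumed threshold for flagging the model for a fairness audit.
if gap > 0.2:
    print(f"Warning: approval rates differ by {gap:.2f} across areas -- audit the model.")
```

A check like this does not prove discrimination, but it flags where a closer look at the data basis is warranted.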

The better-known case concerns prejudice in recruiting and hiring. More and more companies rely on algorithmic decision-making systems at all levels of the HR process. As early as 2016, 72 percent of applicants’ CVs were sorted out by computers. As a result, job applicants deal less and less with actual people and, depending on their résumé and the algorithm, can fall through the cracks.
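One common heuristic for spotting this kind of skew in a résumé screener is the “four-fifths rule”: the selection rate for any group should be at least 80 percent of the highest group’s rate. A minimal sketch, with invented groups and counts:

```python
# Hypothetical screening outcomes per group: (applicants, passed_screening).
screened = {
    "group_a": (200, 90),
    "group_b": (180, 45),
}

# Selection rate per group, compared against the best-performing group.
rates = {group: passed / total for group, (total, passed) in screened.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    status = "ok" if ratio >= 0.8 else "ADVERSE IMPACT suspected"
    print(f"{group}: selection rate {rate:.2f}, ratio to best {ratio:.2f} -> {status}")
```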

The majority of companies try to avoid such incidents. Analytics platforms avoid specific indicators that could give rise to discrimination based on gender, age, or race. One example is LinkedIn, which uses gender information from profiles in an aggregated, non-targeted way to detect and correct possible disadvantage or preference. Google is also rethinking its approach after Google AdWords was accused of sexism: researchers found that male job seekers were shown ads for high-paying executive positions more often than women.

Companies Provide Guidelines

Technology companies have recognized the problem and are increasingly concerned with the ethical use of data. A well-known example is Microsoft, which has committed itself to six principles: non-discrimination, reliability, data protection, accessibility, transparency, and accountability. An example worth following for any technology leader.

The introduction of guidelines at the European level is therefore an essential first step. However, companies should also ask themselves to what extent they have a legal and moral obligation to use AI in accordance with ethical principles. For every company, the equal and non-discriminatory use of AI should come first.

The Universal Declaration Of Human Rights As A Protective Wall

Companies like Bosch or SAP have given themselves guidelines and developed an AI code that, among other things, addresses how artificial and human intelligence complement each other. Above these guidelines stands the overriding criterion that AI products and their use must not conflict with the articles of the “Universal Declaration of Human Rights.”

This also includes the aspect of freedom from prejudice: companies must take a holistic approach to AI technology in order to act free of bias. An AI is only as good as the data available to it, and that data should in any case be fair and representative of all people and cultures.

Therefore, CTOs should ask themselves whether the AI use they are striving for is moral, safe, correct, and truly necessary:

  • Is the data behind your AI technology robust, or does it carry an algorithmic bias?
  • Are the AI algorithms tested against predefined test sets to ensure they are set up correctly and produce the expected results (see the sketch after this list)?
  • Is it transparent, in line with GDPR, how the AI technology affects the company internally and, externally, its customers and partners?
  • Is there a dedicated AI control and advisory committee, made up of cross-functional managers and external consultants, that sets up and monitors the controls for AI-based solutions?
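As an illustration of the second question, a minimal sketch of a check against a predefined test set; the model, test cases, and accuracy threshold below are all assumptions for the example:

```python
from typing import Callable, List, Tuple

TestCase = Tuple[dict, int]  # (input features, expected label)

def check_model(predict: Callable[[dict], int],
                test_set: List[TestCase],
                min_accuracy: float = 0.9) -> bool:
    """Return True if the model reproduces the expected results on the predefined test set."""
    hits = sum(predict(features) == expected for features, expected in test_set)
    accuracy = hits / len(test_set)
    print(f"accuracy on predefined test set: {accuracy:.2f}")
    return accuracy >= min_accuracy

# Hypothetical stand-in model and hand-curated test cases.
def toy_model(features: dict) -> int:
    return 1 if features.get("income", 0) > 30_000 else 0

test_set = [
    ({"income": 50_000}, 1),
    ({"income": 20_000}, 0),
    ({"income": 35_000}, 1),
]

assert check_model(toy_model, test_set), "model failed its predefined test set"
```

Run routinely, such a check turns the abstract question “does the algorithm behave as expected?” into a concrete, repeatable gate before deployment.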

Ultimately, companies have a legal and moral obligation to use AI ethically, but it is also a business imperative: no CTO wants to be known for the inefficient, discriminatory, and unethical use of AI.
