Relying on machines and decision-support systems raises serious ethical problems. And although programs may have a “cold” logic, they are not free from prejudice.
“Feminists should all burn in hell” and “Hitler would have done a better job than the current monkey”: so ranted Tay, Microsoft’s chatbot, in March 2016, from its first day of immersion on Twitter, Snapchat, Kik and GroupMe. Learning by deep learning, and coached by Internet users who amused themselves by making it slip up, the artificial intelligence (AI) even ended up denying the existence of the Holocaust.
A pitiful showcase for machine learning, Tay was shut down by its designers after a few hours. But what if we delegated important decisions to AIs and other algorithms?
In truth, banks, insurance companies and human-resources departments are already testing effective decision-support systems for managing wealth, calculating premiums and screening CVs.
Self-driving cars have roamed the roads of California for years, while the post-baccalaureate admission algorithm (which led to a drawing of lots among certain graduates of the class of 2017 for a university place) has not finished making people cringe.
“For a movie or a pair of socks, I don’t mind receiving advice from decision-support systems, but I find it more troubling that they steer my reading toward news sites that can condition my opinions, or even be conspiracy-minded.” “And when we rely on algorithms and AI (sophisticated algorithms ‘simulating’ intelligence) to make decisions that have serious consequences in the lives of human beings, this clearly poses ethical problems,” he adds.
In this regard, the parole of American detainees is a striking example. “It has been shown that the probability of being released is much higher if you go before the judge after lunch rather than just before,” notes Serge Abiteboul. Algorithms, free from the empty-stomach “syndrome”, were tested in parallel.
“Comparing their performance is easy because, in the United States, parole depends on only one parameter: the risk of the person absconding or reoffending.” The result of the match: “Statistically, the algorithm wins hands down and, in this case, delivers a blind justice that takes only objective facts into account,” comments the researcher. But how far should we go? If sophisticated systems made it possible to judge other cases, would you accept a machine’s decision?
It is not a purely philosophical problem. By delegating our decisions to algorithms and artificial intelligence, we are not only surrendering some of our human dignity (which is no small thing!): these systems also have their flaws.
“Deep learning, one technique among others in artificial intelligence, is both the one that yields the most spectacular applications and the one with a major drawback: we do not know how to explain its results. These are neural networks that work like black boxes.”
With this form of artificial intelligence, in fact, a cat is not recognized because “it has two ears, four legs, etc.” (human-style reasoning made of rules and dictated to the machine), but because it resembles a multitude of other cats whose images have been fed to the machine to “train” it. As for knowing which resemblances tip the balance for this particular one: a complete mystery.
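The contrast between rule-based and resemblance-based recognition can be sketched in a few lines. This is a toy illustration only, not a real vision system: the “features” and their values are invented, and a nearest-neighbour lookup stands in for the far more complex similarity computation a deep network performs.

```python
# Toy sketch: no hand-written rules ("two ears, four legs"); a new image
# is labelled by its resemblance to many labelled training examples.
import math

# Hypothetical 2-D "features" extracted from images (invented numbers).
training_set = [
    ((0.90, 0.80), "cat"),
    ((0.85, 0.75), "cat"),
    ((0.20, 0.30), "dog"),
    ((0.15, 0.35), "dog"),
]

def classify(features):
    """Return the label of the most similar training example."""
    _, label = min(training_set,
                   key=lambda ex: math.dist(ex[0], features))
    return label

print(classify((0.88, 0.79)))  # resembles the cat examples -> "cat"
```

Note that the program never states *why* the input is a cat; the decision emerges from distances to past examples, which is exactly what makes the reasoning hard to explain.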
“Now, it would be good to be able to explain the reasons behind important choices, in order to justify them, and thus guarantee everyone fair treatment,” recalls Raja Chatila, director of the Institute of Intelligent Systems and Robotics. Could these systems be made more transparent? “There is research on the ‘explainability’ of these black boxes, funded in particular by DARPA,” replies the researcher. “But neural networks are only numerical calculations: I do not see how we could extract concepts from them,” observes Sébastien Konieczny. Yet nobody will accept being refused a loan or an attractive position because connection 42 of the neural network is, unfortunately, below 0.2…
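Konieczny’s point can be made concrete with a deliberately minimal sketch: a “loan scorer” reduced to a weighted sum. Every number below is invented; the point is only that the refusal hinges on an anonymous coefficient, with no concept behind it to cite as a reason.

```python
# A network's "reasoning" is nothing but arithmetic on weights.
# Toy single-layer scorer (invented numbers): weights[2] plays the
# role of the article's "connection 42" -- a bare number, not a concept.
weights = [0.7, -0.3, 0.15]
features = [0.5, 0.2, 1.0]    # hypothetical applicant features

score = sum(w * x for w, x in zip(weights, features))
decision = "approved" if score >= 0.5 else "refused"
print(round(score, 2), decision)  # -> 0.44 refused
```

The only available “explanation” for the refusal is the value 0.44 itself, which is precisely the problem the researchers describe.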
Machines “Contaminated” By Our Prejudices
Worse still: while machines may have a cold logic, they are not immune to prejudice. When it comes to learning, “Tay-the-revisionist” is not the only bad student. In April 2017, the journal Science revealed the catastrophic racist and sexist stereotypes of GloVe, an artificial intelligence “fed” with 840 billion examples drawn from the Web in forty different languages in order to make word associations. “If a system is trained on a mass of data produced by human speech, it is hardly surprising that it reproduces its biases.”
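The mechanism behind such biased word associations can be sketched simply. In embedding models such as GloVe, each word is a vector, and “association” is measured by cosine similarity. The vectors below are invented toy values, not real GloVe embeddings; they only show how a biased corpus, once encoded as geometry, yields biased associations.

```python
# Toy sketch of bias in word embeddings: invented 2-D vectors in which
# a (hypothetically biased) corpus has placed "engineer" near "man".
import math

vectors = {
    "engineer": (0.9, 0.1),
    "nurse":    (0.1, 0.9),
    "man":      (0.8, 0.2),
    "woman":    (0.2, 0.8),
}

def cosine(u, v):
    """Cosine similarity: 1.0 = strongly associated, 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return dot / (norm(u) * norm(v))

print(cosine(vectors["engineer"], vectors["man"]))    # high
print(cosine(vectors["engineer"], vectors["woman"]))  # low
```

The model did not decide to be sexist: it faithfully compressed the statistics of the text it was fed, stereotypes included.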
Maintain Responsibility On The Human Side
While waiting for decisions, and for technical solutions to inject them into learning systems, many researchers agree on one point: in delicate cases, the final decision must be left to humans. “Apart from the autonomous car and a few other examples that require real-time reactions, the results offered by decision-support systems almost all leave time for reflection. We could therefore draw inspiration from the way certain decisions are already taken, in the military field in particular, via codified protocols involving several actors.”
Programming The “Moral” In Machine Language
Classical logic is not much help in formalizing rules of ethics: too rigid, it is limited to “true” or “false”, and to “if this, then do that” or, on the contrary, “do not do it”. “In ethics, however, we often find ourselves caught between several contradictory injunctions.” For example, you may have to lie to prevent a murder, or cross a white line to avoid hitting a pedestrian.
In the event of contradictions or dilemmas, preferences must be induced between several “bad” actions, with weighting coefficients. Modal operators make it possible to formalize optional possibilities, that is, actions that are authorized but will not necessarily be carried out. A probabilistic approach can also be used, for example by taking into account the probability that what I glimpse in the fog really is a pedestrian. The predicates are then neither “true” nor “false” but “perhaps true” or “perhaps false”, according to a probability distribution.
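The combination of weighting coefficients and probabilities described above can be sketched for the pedestrian-in-the-fog example. All the penalty values and the probability below are invented for illustration; a real system would have to justify such numbers, which is itself part of the ethical difficulty.

```python
# Sketch of weighted preferences between two "bad" actions: scale each
# harm by the probability that the shape in the fog is a pedestrian,
# then pick the action with the lowest expected harm.
# All numbers are invented for illustration.
p_pedestrian = 0.7  # assumed probability the shape is a pedestrian

penalties = {
    # action: (harm if it IS a pedestrian, harm if it is NOT)
    "stay_in_lane":     (100.0, 0.0),  # hitting a person is worst
    "cross_white_line": (5.0,   5.0),  # traffic violation either way
}

def expected_harm(action):
    harm_if_ped, harm_if_not = penalties[action]
    return p_pedestrian * harm_if_ped + (1 - p_pedestrian) * harm_if_not

best = min(penalties, key=expected_harm)
print(best)  # -> "cross_white_line"
```

With these (assumed) weights, crossing the white line is the lesser evil as soon as the shape is even moderately likely to be a pedestrian; lower the probability enough and the preference flips, which is exactly the behaviour the probabilistic approach is meant to capture.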