The Ethics of Artificial Intelligence, Part 2 of 2: an in-depth plea for an ethical basis in the field of AI by our Qymatix guest author David Wolf.

If you want to read Part 1 first, click here.

Ethics in Conversational AI: What should a bot do, and what should it not do?

Artificial intelligence is like other things in life: just because you can do something doesn’t mean you should. One example of this is conversational AI and, in particular, chatbots. Chatbots can be helpful in marketing, service and support. These digital helpers automate a website’s interaction with visitors to create an optimal user experience, handling customer inquiries quickly and efficiently even when no flesh-and-blood employees are sitting in front of a computer anymore. But who pays attention to ethical aspects when developing chatbots?

People like Maggie Jabczynski, a conversational designer who, according to LinkedIn, works for Vodafone, are pushing for ethics in the development of chatbots and conversational AI. Jabczynski critically questions companies’ one-sided focus on customer centricity. Instead, she throws “humanity centricity” into the ring, asking: “What should a bot do, and what should it not do?” I recently had the pleasure of interviewing her on this topic in my role as content manager for the “Shift/CX” platform, which is all about customer experience management.

Jabczynski believes companies should not use chatbots for rhetorical gimmicks in automated dialogue situations. Specifically, this means that a chatbot should never be programmed to trick or entice those who use it. This is especially critical when people urgently need help or a solution to their problem, for example, in healthcare. Manipulative bots are therefore counterproductive, not least for the image of the companies that use them. Jabczynski is consequently critical of using chatbots for purely economic reasons and argues for evaluating economic activity in a broader context. That starts with a design process that also considers ethical issues and aspects.

AI is a Tool for Solving Problems and Complex Tasks

Artificial intelligence has arrived in many companies, and its use has become a competitive factor. AI can streamline processes, improve customer service, optimize the supply chain and save costs. Those who lag behind will have to leave the field to faster, more efficient competitors. AI is also a permanent presence in everyday private life, where it makes life easier in many respects, for example by relieving us of unpleasant tasks. It is not only legitimate but also practical and reassuring for someone with a poor sense of direction to rely on Google Maps to arrive safely at their destination. And someone who values convenience and buys a smart refrigerator that warns of an impending yoghurt shortage in good time and, based on buying habits determined by AI, reorders the necessary quantity, spares themselves a trip to the supermarket they perceive as a nuisance.

Artificial intelligence is, therefore, a tool for achieving specific goals: to solve problems and even complex tasks that the human brain could never handle, and to determine forecasts and probabilities, which in turn serve as a basis for (better) human decisions. “To serve” is the operative word here, which is why I am arguing at this point for three ethical premises that seem to me indispensable and that should guide the development and use of AI:

● AI is and remains controllable.
● AI does not harm humans.
● AI serves humans, not the other way around.

I do not want to comment further on the individual points but instead ask you, the reader, to simply notice what goes through your mind when you read these premises. Allow yourself enough time for this, and do not try to take a position immediately. Sometimes it helps to take a step back and let things sink in first.

You Don’t Have to Do Everything Just Because You Can

When I look at Silicon Valley, the premises mentioned above seem to be of marginal interest at best to technology giants such as Facebook and Google. I doubt that ethical questions about social coexistence, about what is desirable for a society (What kind of society do we want to live in?), play any role for these corporate leaders.

For example, Facebook uses various forms of artificial intelligence, such as facial recognition software, which civil rights activists have repeatedly criticized in the recent past for problems with accuracy. In one incident, an algorithm mistook Black people for primates, which brings us back to the phenomenon, discussed above, of human biases being incorporated into algorithms. A former Facebook employee who uncovered this AI glitch told The New York Times that Facebook did not care enough to fix the racism problems; she said the company was content with apologies without wanting to change anything about the actual situation.

Is the general acceptance of such more-than-embarrassing incidents ethically justifiable? Isn’t Facebook making itself a catalyst for the racial division between Black and white that the U.S. is already confronted with more than enough? The corporation would have the choice of dispensing with facial recognition as long as it cannot be made to work without provoking social tensions. Instead, the tech giant is more interested in reducing us humans to consumption-hungry objects who voluntarily hand over their data in the belief that doing so grants them undreamt-of freedoms.

It’s a similar story with Google. On the outside, it presents itself as a responsible company, but on the inside, ethics do not seem to count for much. In 2020, Timnit Gebru, then co-lead of Google’s ethical artificial intelligence (AI) team, was shown the door by her employer. Anna Jobin, a researcher at the Humboldt Institute for Internet and Society in Berlin and an expert on the ethics of new technologies, gives the reason in the Swiss online news magazine swissinfo: “Timnit Gebru was hired by Google to deal with AI ethics and was fired because she dealt with AI ethics.” In a paper co-authored with other researchers, Gebru warned about the ethical dangers of large AI language models, technology that also underpins Google’s search engine. The language models, she argued, analyze vast amounts of text from the Internet, mainly from the Western world, creating a geographic bias; the risk of reproducing racist, sexist and offensive language is the result. For Google, this warning went too far.

Human versus Artificial Brain

Against this backdrop, the corporation’s “ethical AI” slogan seems like an empty phrase. It sounds good, but in reality, it amounts to something like ethical greenwashing. There appears to be no interest in an objective, critical discussion of artificial intelligence, one that includes the possibility of unpleasant results coming to light. Does Google maintain an ethics department only because it looks responsible to the public, while reserving the right to fire employees who are too critical? Of course, these are ultimately unprovable speculations, even if one cannot deny that such lines of thought suggest themselves.

On the other hand, what we can quantify is that the human brain has an estimated 100 billion neurons with roughly 1,000 neuronal connections each. “Google Brain,” the artificial, computer-aided brain of a Google research unit that aims to capture the functioning of a human brain with the help of deep-learning systems, consists of one billion synapses simulated by about 1,000 CPU servers with a total of 16,000 cores. It is therefore nowhere near as complex as its human counterpart: the artificial network only reaches about the number of synapses found in the brain of an ordinary honeybee.
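To make that gap tangible, here is a minimal back-of-the-envelope calculation in Python, using only the rounded figures quoted above (estimates, not precise measurements):

```python
# Back-of-the-envelope comparison of synapse counts, based on the
# rounded figures quoted above (estimates, not exact measurements).

human_neurons = 100e9          # ~100 billion neurons in the human brain
synapses_per_neuron = 1_000    # ~1,000 connections per neuron
human_synapses = human_neurons * synapses_per_neuron  # ~1e14 synapses

google_brain_synapses = 1e9    # ~1 billion simulated synapses (2012 setup)

ratio = human_synapses / google_brain_synapses
print(f"Human brain:  ~{human_synapses:.0e} synapses")
print(f"Google Brain: ~{google_brain_synapses:.0e} synapses")
print(f"The human brain has roughly {ratio:,.0f} times more synapses.")
```

On these numbers, the simulated network falls short of its human counterpart by a factor of about 100,000, which is exactly what puts it in the range of a honeybee brain.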

Let’s hope it stays that way.
