Autonomous AI – when does the computer bear responsibility for errors?

January 23, 2024 | In Algorithms | By Selina Mendel
 
Artificial intelligence is taking over more and more tasks in companies. But what happens when it makes a mistake? Who can be held responsible?

We explain how autonomy and responsibility are related and what this means for you and your company when implementing AI.

It is the year 2030, and Maria is sitting in her autonomous vehicle on her way to work, as she does every day. While she is preparing some documents for the day’s meetings as usual, something surprising happens: the vehicle swerves unusually hard to avoid a small group of pedestrians on the road. Only after the vehicle has come to a standstill does Maria notice that, in sparing the pedestrians, the car has hit a cat. Who is now responsible for this event?

The advancement of technological capabilities and the increasing complexity of algorithms enable the use of ever smarter AIs. Imagine using such an innovative AI in your own company to make and implement decisions, just as Maria’s vehicle did for her transportation, without any human intervention in the process. Whom do you blame when a wrong decision leads to massive revenue losses? Probably not the AI. Perhaps the team that programmed the AI, or the manager who decided to use it? But neither of these parties can fairly be blamed for the machine’s subsequent independent development.

We are currently at a crossroads: the AIs we have developed are already autonomous enough to function without much human intervention, but not yet autonomous enough to take responsibility themselves.

We must now decide for ourselves how much responsibility to relinquish in order to work more efficiently, and how much to retain precisely to prevent situations like the one described above.

One possible solution is to keep the autonomy of the machines as low as possible. Complete autonomy of an AI is a second option. How do these fit together? Let us take a closer look in the following sections.

When is an AI autonomous?

There are many different views on what capabilities something or someone needs in order to be called autonomous. However, there is consensus that some degree of self-governance and self-control must be present. Ideally, these are also guided by values, beliefs, and desires.

Back in 1996, Margaret Boden provided one of the earliest guides for deciding whether a machine can be called autonomous. She defined three core aspects that focus specifically on the capacity for self-control.

To summarize Boden’s complex descriptions: to be autonomous, an AI must (1) learn from its experiences, (2) evolve, and (3) be able to reflect on its own behavior.

However, if we look at various examples of AIs, it quickly becomes apparent that a clear-cut classification as fully autonomous or non-autonomous is not always possible, because some requirements are usually met while others are not. This ambiguity opens a dangerous intermediate area of autonomy that leaves structures of responsibility unclear.

Let’s take a closer look at the story of Microsoft’s chatbot Tay. In 2016, Microsoft unleashed the self-learning chatbot on Twitter, where it learned from human tweets and evolved. However, it quickly began posting racist, sexist, and otherwise discriminatory statements, forcing Microsoft to take it offline.

If we now try to measure Tay’s autonomy against Boden’s criteria, we realize that we will not come to a clear conclusion. Tay did develop autonomously and learned from its environment. Still, it did not reflect on its behavior at any point and could not respond to criticism or show remorse and insight. Thus, we can determine only partial autonomy in this example. Microsoft’s developers publicly apologized and accepted responsibility in this case. But should they have done so, or were the Twitter users from whose tweets Tay learned the behavior to blame?
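
To make the idea of partial autonomy concrete, here is a minimal sketch in Python. The checklist structure and its names are our own illustration, not something from Boden’s text; it simply encodes her three criteria as flags and applies them to Tay:

```python
from dataclasses import dataclass

@dataclass
class BodenChecklist:
    """Boden's three criteria for machine autonomy, encoded as flags."""
    learns_from_experience: bool  # (1) learns from its experiences
    evolves: bool                 # (2) develops further on its own
    reflects_on_behavior: bool    # (3) can reflect on its own behavior

    def classify(self) -> str:
        criteria = [self.learns_from_experience, self.evolves,
                    self.reflects_on_behavior]
        if all(criteria):
            return "fully autonomous"
        if any(criteria):
            return "partially autonomous"
        return "not autonomous"

# Tay, assessed as in the text: it learned and evolved, but never reflected.
tay = BodenChecklist(learns_from_experience=True, evolves=True,
                     reflects_on_behavior=False)
print(tay.classify())  # -> partially autonomous
```

Any system that meets some of the criteria but not all of them lands in exactly the ambiguous middle ground described above.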

As you can see, this partial autonomy is a big problem when it comes to organizing and attributing responsibility within a company. The extremes of no autonomy and full autonomy, however, are easier to handle.

When is AI autonomy a challenge, and when is it a solution?

We have already seen that semi-autonomous AI, like Microsoft’s chatbot Tay, poses some challenges for your organization. Several questions need to be answered, such as “What should AI be able to do, and where are we better off limiting it?”, “Where do we strike a balance between efficiency and loss of responsibility?”, and especially “Who bears the responsibility when something goes wrong?”.

If AI software is not autonomous at all, the attribution of responsibility is clear. Let’s take the Qymatix predictive sales software as an example. It works with an AI that shows you price recommendations, cross-selling potential, and possible churn risks for your B2B customers. It makes these predictions based on your company’s internal historical data, and its algorithms are constantly “learning” to make better predictions. Yet while the software points out lucrative opportunities, it does not initiate any actions itself to pursue them. The salespeople in your company decide whether or not to follow the software’s recommendations, and accordingly bear the responsibility themselves.
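
This division of labor is often called a human-in-the-loop pattern: the model proposes, the human disposes. The following sketch illustrates the idea in Python; the class, names, and data are hypothetical and do not reflect Qymatix’s actual product or API:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    customer: str
    action: str        # e.g. "offer cross-sell", "adjust price"
    confidence: float  # the model's estimated probability of success

def review(recommendations: list[Recommendation]) -> list[Recommendation]:
    """The salesperson, not the software, decides what gets executed."""
    approved = []
    for rec in recommendations:
        answer = input(f"{rec.customer}: {rec.action} "
                       f"(confidence {rec.confidence:.0%}) - execute? [y/n] ")
        if answer.strip().lower() == "y":
            approved.append(rec)  # responsibility stays with the human
    return approved

# Hypothetical model output; in practice this would come from scoring
# your company's historical sales data.
recs = [Recommendation("Customer A", "offer cross-sell", 0.83),
        Recommendation("Customer B", "prevent churn", 0.71)]
print([r.customer for r in review(recs)])
```

The key design choice is that nothing is executed without an explicit human decision, which is precisely what keeps the attribution of responsibility clear.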

Responsibility is similarly transparent in the case of fully autonomous systems. So far, such systems exist only in movies, books, and video games. Still, if we imagine that they could become reality one day, we can quickly answer the question of responsibility.

You may be familiar with the video game “Detroit: Become Human.” It is set in a fictional world in 2038, where the company “Cyberlife” sells lifelike androids that increasingly replace human workers. At first, the player observes the reasonably peaceful coexistence of androids and humans. However, following a software error, the androids eventually evolve their own algorithms and develop a will of their own. As a result, they can feel empathy and emotions; they also commit crimes and demand equality within society. This extreme shows that with full autonomy, the AI itself could be held responsible.

However, it will be several years before we have to deal with fully autonomous AI. Therefore, let’s look at how you should deal with the far more prevalent semi-autonomous AI.

How should I deal with semi-autonomous AI?

Where there is either no autonomy or complete autonomy, responsibilities are clearly defined. However, planning and prevention are essential if semi-autonomous AI systems are to be integrated into your company.

Plan pre-emptively in which areas of your business AI should be used and which areas you want to keep under complete human control.

In addition, you must set boundaries for the areas you release for AI deployment so that a human takeover is always possible. As a first step, prioritize your business areas and filter them by complexity. Essential tasks that are less complex but very time-consuming are particularly well suited to being taken over by an AI, as the sketch below illustrates.
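
Here is a minimal sketch of such a prioritization in Python, assuming a made-up complexity scale and invented example tasks:

```python
from dataclasses import dataclass

@dataclass
class BusinessTask:
    name: str
    essential: bool
    complexity: int        # 1 (simple) to 5 (highly complex), illustrative scale
    hours_per_week: float  # time the task currently consumes

def ai_candidates(tasks: list[BusinessTask],
                  max_complexity: int = 2) -> list[BusinessTask]:
    """Keep essential, low-complexity tasks and rank them by time consumed --
    the tasks most suitable for an AI takeover."""
    suitable = [t for t in tasks
                if t.essential and t.complexity <= max_complexity]
    return sorted(suitable, key=lambda t: t.hours_per_week, reverse=True)

tasks = [
    BusinessTask("data entry", essential=True, complexity=1, hours_per_week=12.0),
    BusinessTask("lead scoring", essential=True, complexity=2, hours_per_week=8.0),
    BusinessTask("contract negotiation", essential=True, complexity=5, hours_per_week=6.0),
]
for task in ai_candidates(tasks):
    print(task.name)  # -> data entry, lead scoring
```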

You should also clarify the question of responsibility before the final implementation of the AI. Appoint a person responsible for AI in your company. This person should ideally be familiar with the technology and be able to intervene at short notice, implementing changes or taking the system offline when in doubt.
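
One simple way to make this role operational in software is a supervised wrapper with a kill switch. The following Python sketch is purely illustrative; the class and the stand-in model are hypothetical:

```python
class DummyModel:
    """Stand-in for an actual AI model (hypothetical)."""
    def predict(self, situation: str) -> str:
        return f"recommended action for: {situation}"

class SupervisedAI:
    """Wraps an AI system so a designated person can take it offline at any time."""
    def __init__(self, model, responsible_person: str):
        self.model = model
        self.responsible_person = responsible_person
        self.online = True

    def act(self, situation: str) -> str:
        if not self.online:
            raise RuntimeError(f"System offline -- contact "
                               f"{self.responsible_person} for manual handling.")
        return self.model.predict(situation)

    def take_offline(self, reason: str) -> None:
        # The responsible person pulls the plug at short notice when in doubt.
        print(f"{self.responsible_person} took the system offline: {reason}")
        self.online = False

system = SupervisedAI(DummyModel(), responsible_person="Head of AI")
print(system.act("pricing request"))
system.take_offline("discriminatory outputs detected")
```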

 
CALCULATE THE ROI OF QYMATIX PREDICTIVE SALES SOFTWARE NOW
 

Autonomous AI – Conclusion

As opaque and complex as many AI algorithms are, it is clear that we hardly need to worry about dystopian scenarios of fully autonomous AI systems today. Still, the partial autonomy of many AIs can lead to challenges, especially when it comes to assigning responsibility after AI missteps. Prepare thoroughly for such scenarios before integrating a highly innovative AI into your organization. If you are unsure, opt for less autonomous AIs, where you retain complete control and the attribution of responsibility is clearly defined.

I WANT PREDICTIVE ANALYTICS FOR B2B SALES.
 

Further Reading:
 

Beate Rössler (2017): Autonomie – Ein Versuch über das gelungene Leben (German Language)

Bernd Graff (2016): Rassistischer Chat-Roboter: Mit falschen Werten bombardiert (German Language)

Kaspar Molzberger (2020): Autonomie und Kalkulation (German Language)

Margaret A. Boden (1996): Autonomy and Artificiality

Michael Wheeler (2020): Autonomy

Peter Lee (2016): Learning from Tay’s introduction

Sony (2018): Detroit: Become Human

World Economic Forum (2020): Model Artificial Intelligence Governance Framework and Assessment Guide