Human behaviour towards algorithms in everyday life, based on the book “Hello World – What algorithms can do and how they change our lives” by Hannah Fry.

A few words in advance about the book. Hannah Fry believes “Algorithms are everywhere, and it’s time to understand them.” A statement I agree with 100 per cent. And that’s what her work is about.

The author manages to shed light on the world of algorithms in a very entertaining way and in plain language.
Each chapter starts with a compelling example from the real or fictional world to illustrate a particular aspect or problem of algorithms.


This article is mainly devoted to an aspect that Hannah Fry addresses quite early in the book: the paradoxical behaviour of humans when dealing with algorithms. A fascinating topic, in my opinion, because I partly recognized myself in it – in my private dealings with algorithms – but also recognized our customers in the business world.
However, to bring everyone up to the same level, let us first discuss the question: what are algorithms, and why have we all come into contact with them at some point?

Algorithms in everyday life

In a broader sense, algorithms are “step-by-step instructions” designed to lead to a specific goal. By this broad definition, every cookbook consists of algorithms.
But that’s not how we use the term. Or have you ever said, “Do you know the algorithm for Grandma’s strawberry pie?”

In practice, however, the word algorithm is used in connection with mathematical objects: step-by-step instructions made up of a wide variety of mathematical operations to accomplish a particular goal. When these operations are translated into software code, an algorithm has been programmed.
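To make this a little more concrete, here is a minimal sketch of my own (not from the book) of such a step-by-step instruction translated into code – Euclid's classic algorithm for the greatest common divisor:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a step-by-step sequence of simple
    mathematical operations that always reaches a defined goal."""
    while b != 0:
        # Step: replace (a, b) with (b, remainder of a divided by b)
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```

Every run follows the same fixed steps – exactly the "recipe" character described above, only written for a computer instead of a cook.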

There are rule-based and non-rule-based algorithms. The former follow static rules set by humans; the latter are based on machine learning (a branch of artificial intelligence). More in-depth information on this topic can be found in this post.
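Purely as an illustration (the scenario, names and numbers below are made up and come neither from the book nor from our software), the difference might look like this in code: a rule-based algorithm applies a threshold a human has chosen, while a machine-learning-style algorithm derives its threshold from example data:

```python
# Hypothetical task: flag customers who are "at risk of churning".

# Rule-based: a static rule set by a human expert.
def rule_based_at_risk(days_since_last_order: int) -> bool:
    return days_since_last_order > 90  # threshold chosen by a person

# Machine-learning style: the "rule" (here a simple threshold) is
# estimated from historical examples instead of being hand-coded.
def learn_threshold(days: list[int], churned: list[bool]) -> float:
    churned_days = [d for d, c in zip(days, churned) if c]
    loyal_days = [d for d, c in zip(days, churned) if not c]
    # Place the threshold halfway between the two group averages.
    return (sum(churned_days) / len(churned_days)
            + sum(loyal_days) / len(loyal_days)) / 2

history_days = [10, 20, 30, 120, 150, 200]      # invented example data
history_churned = [False, False, False, True, True, True]
threshold = learn_threshold(history_days, history_churned)

def learned_at_risk(days_since_last_order: int) -> bool:
    return days_since_last_order > threshold

print(rule_based_at_risk(100), learned_at_risk(100))
```

Real machine-learning algorithms are, of course, far more sophisticated than this toy threshold, but the principle is the same: the rule is learned from data rather than written down by a person.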

We encounter algorithms every day. They guide us to our desired destination (navigation systems), they suggest series and movies to match our tastes (Netflix & Amazon Prime), they fly us across the Atlantic (autopilot), and much more.

So we deal with algorithms daily. And in doing so, Fry observes a very paradoxical pattern: on the one hand, blind trust; on the other, total mistrust. When it comes to algorithms, black-and-white thinking predominates. There is no grey. And that is a problem.

Blind trust in algorithms

As an example of blind trust in algorithms, Fry opens her chapter with Robert Jones. Jones was driving home from visiting friends one evening and was, in short, directed by his sat nav into the middle of nowhere. The roads kept getting narrower and rockier. But Jones' navigation system showed a road, so he kept driving – "the sat nav will be right." His ride ended at a fence that only just kept him from plunging 30 feet. He got off lightly.

Of course, Jones’ case is extreme. But I’m sure some have experienced similar things themselves in a mitigated way. I have. When I wanted to visit a well-known “View Point” on the Wild Atlantic Way on an Ireland road trip with a friend, the sat nav also led us quite astray.

Our gut sent out warning signals pretty early on: "Is this the right place?", "It looks odd here." But we kept going until we were standing in front of tall grass and a donkey. We never saw the viewpoint, but we did come away with a story called "The Ride That Ended with a Donkey."

Fry explains this behaviour by pointing out that we are surrounded by algorithms that offer an easy way to delegate responsibility. When was the last time you critically examined Google's results? Or used a ruler to check whether the route displayed by your navigation system really is the shortest one?

The important thing is that these algorithms (Google, navigation, etc.) are proven algorithms that usually work. In my opinion, we don’t have to feel naive because we let algorithms help us in our everyday lives. In any case, I don’t want to do without translation aids, navigation or search engines anymore.

Nevertheless, these examples show that even proven algorithms are not always correct.

So it’s not wrong to listen to your gut feeling in the right situation.

Extreme distrust in algorithms

Fry introduces this aspect, too, with an interesting case: in 1954, Professor Paul Meehl published a study clearly showing that mathematical algorithms – no matter how simple – almost always made better predictions than humans. And this has been confirmed again and again by studies over the following sixty-plus years.

“The computer won’t be perfect, but you would only increase the error rate if you gave humans a veto over the algorithm.” As soon as many calculations are needed to make a prediction, the computer is better. In principle, this is nothing new.

But here's the paradox: we often blindly trust algorithms we don't understand – yet as soon as we know an algorithm can make mistakes, we overreact and all trust is gone. Every result is questioned and doubted. There is even a name for this phenomenon: algorithm aversion. Errors made by algorithms are tolerated far less than human errors – even when the human errors are bigger.

This strong black-and-white thinking about algorithms prevents us from making the best use of new technologies. “Omniscient” versus “total garbage.”

 
CALCULATE THE ROI OF QYMATIX PREDICTIVE SALES SOFTWARE NOW
 

The right way to deal with algorithms

We must learn to be more objective when dealing with these technologies. The first step, in my opinion, is a basic understanding of how they work.

And that is something we notice to some extent with our customers. During the testing phase, the algorithms of our predictive sales software are put through their paces – and that is how it should be. But at some point, users (in our software, these are salespeople in the B2B sector) should stop questioning every single, tiny forecast and researching whether it could possibly be true. They should start harvesting the low-hanging fruit and "just do it".

Our most successful customers have incorporated predictive sales software forecasts into their sales processes.

They use our algorithms for what they are: a supporting tool in sales.

In his book "Zero to One", Peter Thiel goes so far as to say that the most successful AI models are hybrid – in other words, a combination of humans and machines. An opinion I share.

Key Learnings

I think a conscious and objective use of algorithms can enrich and facilitate our lives. A prerequisite for this is, in my opinion, at least a basic understanding of how algorithms work, what they can “know”, and what they cannot.

When I come into contact with new algorithms, I make a point of questioning and critically checking the results. Algorithms that have already proven themselves I trust more. For example, I'm about to translate this post into another language using DeepL. I will probably change a few words where, in my opinion, there is a better alternative – but I am incredibly grateful that an algorithm takes so much of the work off my hands.

How do you deal with algorithms? Let us know in the comments!

I WANT ALGORITHMS FOR B2B SALES.
 

Further Read:
 

Hannah Fry (2019): Hello World – Was Algorithmen können und wie sie unser Leben verändern. (German Version)

Peter Thiel (2014): Zero to One: Wie Innovation unsere Gesellschaft rettet. (German Version)