What we can learn from the most popular AI scandals

 

With AI front and centre in our collective consciousness, it’s no surprise that it has also had its fair share of controversy. AI scandals are not that numerous (yet), but they are impactful. So, today we’ll look at some famous AI scandals and what we can learn from them.

Popular AI scandals

Since the 2010s, controversial news stories relating to AI systems and their use have popped up. Here are some of the most memorable and worrisome:

• Facebook's news feed
In 2014, it emerged that Facebook had run a controversial experiment in which it manipulated the content displayed in the news feeds of almost 700,000 users. The goal was to study the emotional impact this adjustment could have on them. None of those users knew they were part of a test, and the study sparked ethical concerns about informed consent and the potential for emotional manipulation at scale.

• IBM photo-scraping
In 2019, it was discovered that IBM had used roughly one million images of real human faces, taken from the photo-sharing website Flickr, to train its facial recognition technology. The people in those photos never gave informed consent for their data to be used in this way, and the case highlighted broader ethical concerns about how the data behind big models is sourced.

• Google Nightingale
While Google’s motive might be noble - to improve patient outcomes - the idea that it has access to millions of complete patient medical records should worry you. Under Project Nightingale, patients in the USA weren’t informed at the time that their data was being sent to Google, and the information wasn’t anonymised (a similar controversy arose in the UK over NHS patient records shared with Google’s DeepMind). That means there are staffers at Google with access to fully identifiable medical information who aren’t trained medical professionals.

• Amazon hiring
In 2018, it was revealed that Amazon had built an AI recruiting tool to help it sort through resumes and surface top candidates, grading them on a star system just like Amazon product reviews. But, like many systems trained on flawed data carrying human biases, it learned to favour male candidates: it downgraded resumes containing female-coded phrases like ‘women’s chess club captain’. As a result, Amazon abandoned the project. The sketch below shows how this kind of bias creeps in.
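Here is a minimal sketch of the mechanism, using invented toy data rather than Amazon’s actual system or dataset: a text classifier trained on historical hiring decisions that skewed male will learn to penalise words that correlate with female candidates, even when everything else on the resume is identical.

```python
# Minimal sketch of bias inherited from training data.
# Toy, invented examples -- NOT Amazon's actual system or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical resumes with hiring outcomes (1 = hired). The labels
# reflect a past preference for male candidates, not merit.
resumes = [
    "men's rugby club captain, python developer",
    "men's chess club member, data engineer",
    "women's chess club captain, python developer",
    "women's coding society lead, data engineer",
]
hired = [1, 1, 0, 0]  # biased historical decisions

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Two resumes identical except for one word: the model scores
# the second one lower purely because it contains "women's".
test = vectorizer.transform([
    "men's chess club captain, python developer",
    "women's chess club captain, python developer",
])
print(model.predict_proba(test)[:, 1])
```

The point is not the toy model itself but where the signal comes from: the classifier has no notion of gender, yet it reproduces the bias baked into its labels - exactly the failure mode Amazon ran into.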

• Lethal Autonomous Weapons (LAWs)
It will come as no surprise that countries such as the USA and Russia are drawn to the idea of using AI for military applications. In 2018, the UN met to discuss Lethal Autonomous Weapons, or LAWs. A 2018 documentary explores these issues, and famous voices such as Elon Musk and Stephen Hawking have banded together to sound the alarm about AI weapons that operate without any governance to prevent them from hurting humans.

• Self-driving car accidents
You’re likely to have heard of the multiple incidents involving self-driving cars, both in test and real-life scenarios. According to KNR Legal, “In 2022, Automakers reported approximately 400 crashes of vehicles with partially automated driver-assist systems to the NHTSA. 273 of these accidents involved Teslas (the most common vehicle with self-driving capability), 70% of which used the Autopilot beta at the time. Out of the 98 self-driving crashes with injuries, 11 resulted in serious injuries. Five incidents involving Teslas were fatal.”

While that is far fewer than the number of accidents caused by human drivers, it is still too many for such systems to inspire confidence.

• Cambridge Analytica
In 2018, we all learned that a third-party firm had been given access to the personal data of approximately 87 million Facebook users back in 2014 - all without their consent. That firm, Cambridge Analytica, used the data to influence politics and manipulate public opinion. Meta has since agreed to pay $725m to settle the resulting class-action lawsuit.

• Deepfakes
The growing trend of using faked video footage of celebrities or influencers to sell products is now ubiquitous on platforms like TikTok and YouTube. But there is also concern over bodily autonomy, child protection and personal rights, as the technology is now being used to create adult material too. The Netflix series ‘Black Mirror’ explores this concept further in the episode ‘Joan Is Awful’, where a random ‘Streamberry’ user realises she signed away the rights to dramatise her day-to-day life when she registered for the service. Calamity ensues.

• Tay, the Twitter Bot
In 2016, Microsoft launched a fun little AI personality called Tay for users to interact with on Twitter. Tay was designed to learn and grow by talking with people online, but things went sour quickly. Built to engage 18-24-year-olds, the bot had no real safeguards around what it might learn, and coordinated users quickly exploited that, as the sketch below illustrates. Within hours, Tay was posting racist and offensive content, prompting Microsoft to take it offline for good.
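Tay’s failure mode is easy to reproduce in miniature. The sketch below is a deliberately naive toy, not Microsoft’s actual architecture: a bot that adds every user message to its own response pool will echo whatever a coordinated group feeds it, which is why a moderation layer between input and learning (here a stand-in blocklist) is essential.

```python
import random

# Toy illustration of Tay's failure mode -- not Microsoft's design.
BLOCKLIST = {"offensive", "slur"}  # stand-in for a real moderation layer

class NaiveBot:
    """Learns by adding every user message to its response pool."""
    def __init__(self):
        self.responses = ["hello!"]

    def learn(self, message: str) -> None:
        self.responses.append(message)  # no filtering at all

    def reply(self) -> str:
        return random.choice(self.responses)

class FilteredBot(NaiveBot):
    """Same bot, but refuses to learn from flagged content."""
    def learn(self, message: str) -> None:
        if not any(word in message.lower() for word in BLOCKLIST):
            super().learn(message)

# A coordinated group floods each bot with toxic input ...
for Bot in (NaiveBot, FilteredBot):
    bot = Bot()
    for msg in ["you are great", "offensive remark", "offensive remark"]:
        bot.learn(msg)
    toxic = sum("offensive" in r for r in bot.responses)
    print(f"{Bot.__name__}: {toxic} toxic of {len(bot.responses)} learned")
```

Real moderation is far harder than a blocklist, but the structural lesson is the same: anything a system learns from unvetted users, it will eventually say back.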

• AI-generated art and copyrights
The emergence of AI-generated art has led to debates about copyright and ownership, and there are arguments on all sides. Some say that AI produces genuinely new pieces and that this work should be protected. Others insist that since AI cannot have experiences, the work should belong to the human artists who inspired it. Still others argue that the prompt engineers are the real artists, simply using a tool to make art the way a writer uses a pen - and that they should own the copyright.

 

Learning from the most popular AI scandals

The most important takeaway is that we still have a lot of work to do in this space. The ethics, laws and applications of AI are still being decided, and we can use these scandals as a guide for how we want our future to look.

Will we have killer robots or should AI always be used for good? How should humans be treated and what can AI own? While we might not have all the answers just now, we can learn from these controversies and use them to create policies and legislation that protect everyone. For now, the approach is clear: we should only use AI to uphold the highest standards of good for humans at all times.

Interested in ethically using AI in your own business to get ahead of the marketplace? We can help. We empower your sales teams to deliver above and beyond forecasts with smart, AI-supported tools.

I WANT PREDICTIVE ANALYTICS FOR B2B SALES.
 


