Is Artificial Intelligence (AI) dangerous?

Ist KI gefährlich
January 30, 2024|In Uncategorized|By Amanda Fox
As with any new technology, there will be experts that are for and against it.

Today, we’ll explore the subject of artificial intelligence more fully. What concerns are there in relation to AI?

Experts concerned about AI

From researchers to tech billionaires, there are a few voices that are worried. So, let’s explore their concerns around the unmitigated advancement of artificial intelligence.

– Elon Musk
The world’s richest person was one of the signatories to a letter this year calling for a pause in the development of large-scale AI models such as ChatGPT, the chatbot built by US company OpenAI. There are growing fears that the development of AI technology will run out of human control.

https://www.theguardian.com/technology/2023/jul/13/elon-musk-launches-xai-startup-pro-humanity-terminator-future

– Stephen Hawking
The great professor stated that AI will either be ‘the best or the worst thing, ever to happen to humanity.’ His concerns were around AI becoming a threat after it surpassed human intelligence and outperformed us.

– Sam Harris
Neuroscientist and philosopher Sam Harris has been raising the alarm about AI and its danger to us even without it wanting to. He argues that AI will become so smart it will damage human interests – almost by definition.

– Bill Gates
In his own blog, Microsoft co-founder Bill Gates expresses the need for caution and states that ‘the risks of AI are real but manageable’ as he calls for governments to take action on regulations.

– Stuart Russell
Stuart Russell, a professor of computer science at UC Berkeley and a leading AI expert argues that by making AI smarter than us, we make them more powerful than us. And they may not have the same goals as us, so regulation is needed.

As you can see, the “worried voices” all have one thing in common: the fear of an uncontrollable “strong artificial intelligence” or “general artificial intelligence”. A fear that science fiction scenarios such as “Terminator”, “Ex-Machina” or “I, Robot” will come true. At this point, you can breathe a sigh of relief. All currently existing AI systems fall into the “weak AI” category. So there is no danger in the here and now.

Voices optimistic about AI

But on the other side, there are great minds who think artificial intelligence is poised to help humanity evolve. How do they see AI ushering us into our next stage of development?

– Andrew Ng
As the co-founder of Google Brain, Andrew Ng is positive about how AI can help humanity and empower all businesses. It does this with rapid, data-driven decision-making (like how our predictive sales software works) and other innovations.

– Fei-Fei Li
An AI researcher and professor at Stanford, Fei-Fei Li regularly speaks about the positive applications of AI. It can transform industries and fix problems in healthcare and transportation if it’s fully invested.

– Ray Kurzweil
Futurist and Google’s Director of Engineering, Ray Kurzweil thinks that good people need to accelerate the progress of AI. This is because pausing innovation simply won’t be adhered to by bad actors and businesses who need the tech will fail while we wait.

– Demis Hassabis
CEO of DeepMind, Demis Hassabis, is understandably a fan of AI and how it can improve science and solve challenges. Their pioneering AlphaGo program was the first AI to beat a human pro Go player.

– Mark Zuckerberg
And lastly, Facebook’s (now Meta) co-founder Mark Zuckerberg is actively pursuing generative AI for amazing new features on Instagram and other Meta properties. He’s also expressed excitement about AI’s wider applications for fields like learning and medicine.

AI issues to monitor

The above-mentioned fear of a “takeover of humanity by artificial intelligence” is an extreme scenario. How realistic it is is debatable – ultimately, we don’t know and it cannot be denied that the possibility of such a scenario at least exists in the future.

The experts have addressed so-called “key areas” which, in addition to the fear of a strong AI, also address obvious and current “dangers”. So let’s take a look at some areas where AI may need to be adapted or regulated to protect people from danger.

– Bias

Since all humans have biases, then all AI will too. That’s because AI is trained on our data and there will be inequalities or unfairness built in that we won’t even be aware of. So, with AI, there’s a concern that it will perpetuate existing societal inequalities.

– General AI
So, general AI or strong AI is the type of AI that mimics or can even surpass human ability. It’s this type of AI that experts worry will react unreliably or shake off all bonds to act in ways that are detrimental to humans. (Like how we might overlook the needs of ants, for example.) So, there is some interest in maintaining narrow AI or weak AI. These are systems designed for a specific task that have boundaries and they are safer by nature.

– Autonomous decision making

Self-driving cars and other systems that make decisions without our input are of particular concern. When things go wrong and these AI make the wrong choice, we need to evaluate where we can update their algorithms to ensure better alignment with our values.

– Unintended consequences

Sometimes, unbridled AI can do things that we can’t anticipate (like the Chat GPT-like session where the generative system tried to convince a man to leave his partner.) So, we need to be monitoring and tweaking these systems all the time to reduce any bad outcomes.

– Job displacement

The creative fields are already feeling the crunch. AI is very good at writing, making digital art and editing photos. This is leading to job displacement in graphic design, copywriting and marketing. And, until we can cross-train these professionals, we’ll see an impact on the livelihoods of many individuals.

– Ethics

Just like with the built-in biases, we also face issues like the “trolley problem” for self-driving cars. If an AI will need to make decisions where both outcomes are bad, we need to decide, as humans, what we value and teach it to make those tough choices ‘properly’.

– Security
Without the ability to know if a user is a good or bad actor, AI systems are often more vulnerable to attacks or hacking. At the moment, we don’t have a good standard of security for prompts. With a little creativity, cybercriminals can get around current safeguards. And this is leading to security breaches and privacy violations.

CALCULATE NOW THE ROI OF QYMATIX PREDICTIVE SALES SOFTWARE

Conclusion – Regulation and control of AI

One thing that most experts on either side of the argument agree on is the need for effective regulation and governance. This is to ensure that AI development is safe, helpful and legal. We’ve already seen how a lack of proper oversight is leading to worrying developments like LAWs and other military applications.

Within safe bounds, AI is helpful and useful – especially for businesses. If you’d like to see how AI and machine learning could transform your sales operations and lead to record conversions, let’s talk about your business today.

I WANT PREDICTIVE ANALYTICS FOR B2B SALES.