The Good, the Bad and the Algorithmic

Artificial intelligence (AI) is the hot topic these days. It's everywhere, and you probably use it every day. That chatbot you talk to about your lost package? Powered by conversational AI. The "featured" items suggested alongside your most-purchased items on Amazon? Powered by AI/ML (machine learning) algorithms. You can even use generative AI to help write LinkedIn posts or emails.

But where should the line be drawn? When AI can take care of repetitive, monotonous tasks, and can research and create content far faster than any human, why do we need humans at all? Is the "human element" really required for a business to function? Let's take a closer look at the benefits, challenges, and risks on both sides, and ask who is really best suited for the job: a machine or a human?

Why AI works

AI has the power to optimize business processes and reduce the time spent on tasks that eat into employee productivity and business results. Companies are already adopting AI for a variety of functions, whether it's screening resumes for job applications, identifying anomalies in customer data sets, or writing content for social media.

And AI can do it all in a fraction of the time it would take humans. In fields such as healthcare, where early diagnosis and intervention are everything, implementing AI could have a hugely positive impact. For example, an AI-assisted blood test can reportedly help predict Parkinson's disease up to seven years before symptoms appear, and that's just the tip of the iceberg.

With their ability to discover patterns in massive amounts of data, AI technologies can also support the work of law enforcement, including helping them identify and predict likely crime locations and trends. AI-based tools also play a role in combating crime and other threats in the online space and helping cybersecurity professionals perform their duties more effectively.
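To make the pattern-spotting idea concrete, here is a minimal sketch of anomaly detection on synthetic login records, using scikit-learn's IsolationForest. The features and thresholds are illustrative assumptions, not a production detector.

```python
# Minimal anomaly-detection sketch (illustrative only): flag unusual
# login events so a human analyst can review them.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic features per login: [hour of day, MB downloaded, failed attempts]
normal = np.column_stack([
    rng.normal(13, 2, 1000),   # logins cluster around business hours
    rng.normal(50, 10, 1000),  # typical download volume
    rng.poisson(0.2, 1000),    # occasional failed attempt
])
suspicious = np.array([[3.0, 900.0, 8.0]])  # 3 a.m., huge download, many failures

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)
events = np.vstack([normal[:5], suspicious])
print(model.predict(events))  # 1 = looks normal, -1 = flagged as anomalous
```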

The ability of AI to save companies money and time is nothing new. Think about it: the less time employees spend on mundane tasks like scanning documents and uploading data, the more time they can devote to business strategy and growth. In some cases, full-time contracts may no longer be necessary, so the company will spend less money on overhead (which, understandably, isn’t great for employment rates).

AI-based systems can also help reduce the risk of human error. The saying "we're only human" exists for a reason. We can all make mistakes, especially after five cups of coffee, three hours of sleep, and a looming deadline. AI-based systems can work around the clock without ever getting tired, giving them a level of consistency that even the most detail-oriented and methodical human can't match.

Limitations of Artificial Intelligence

But make no mistake: things get a bit more complicated when you look closer. While AI systems can minimize errors related to fatigue and distraction, they are not infallible. AI can also make mistakes and "hallucinate," i.e., confidently output false information that looks plausible, especially if there are problems with the data it was trained on or with the algorithm itself. In other words, AI systems are only as good as their training data, which is why human knowledge and oversight remain essential.

Continuing on this theme, while humans may claim to be objective, we are all susceptible to unconscious biases based on our own life experiences, and it is difficult, if not impossible, to switch them off. AI does not inherently create biases; rather, it can reinforce biases already present in the data it is trained on. In other words, an AI tool trained on clean, unbiased data really can produce data-driven results and help correct biased human decision-making. That said, this is no easy task: ensuring fairness and objectivity in AI systems requires ongoing effort in data collection, algorithm design, and monitoring.
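As a toy illustration of what that monitoring can look like in practice, the sketch below computes a simple disparate-impact ratio on hypothetical model outcomes; the data and the 0.8 threshold (the common "four-fifths rule" heuristic) are assumptions for demonstration only.

```python
# Toy fairness check (illustrative): compare a model's positive-outcome
# rates across two groups using the "four-fifths rule" heuristic.
hypothetical_outcomes = [
    # (group, model_approved)
    ("A", True), ("A", True), ("A", False), ("A", True), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True), ("B", False),
]

def selection_rate(group: str) -> float:
    rows = [approved for g, approved in hypothetical_outcomes if g == group]
    return sum(rows) / len(rows)

ratio = selection_rate("B") / selection_rate("A")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule: a common (not legally definitive) red flag
    print("Potential bias detected: review training data and features.")
```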


A 2022 study found that 54% of tech leaders said they were very or extremely concerned about AI bias. We've already seen the devastating consequences that using biased data can have on businesses. For example, thanks to biased data sets used by an insurance company in Oregon, women were charged about 11.4% more for car insurance than men, even when everything else was exactly the same. Missteps like this can easily lead to reputational damage and lost customers.

As AI is fueled by vast data sets, privacy concerns follow. When it comes to personal data, malicious actors can find ways to bypass privacy protocols and gain access to that data. While there are ways to create a more secure data environment across these tools and systems, organizations still need to be vigilant about any gaps in their cybersecurity, given the expanded attack surface that AI brings with it.

In addition, AI cannot understand emotions in the way that (most) humans do. People on the other end of an interaction with AI may sense a lack of empathy and understanding that they would get in a real "human" interaction. This can hurt the customer and user experience, as demonstrated by World of Warcraft, which lost millions of players after replacing its customer support team (real people who would go into the game themselves to show players how to do things) with AI bots that lack humor and empathy.

AI's lack of context can also cause problems with data interpretation, especially when the data set is limited. For example, cybersecurity experts may have a baseline understanding of a particular threat actor that lets them identify and flag warning signs a machine might miss if the activity doesn't perfectly match its programmed patterns. It's these intricate nuances that can have huge downstream consequences for both the company and its customers.

And while AI may lack context and understanding of its inputs, humans often lack understanding of how their AI systems work. When AI operates as a "black box," there is no transparency into how or why the tool arrived at its results or decisions. The inability to see what's happening behind the scenes can lead people to question the validity of its output. Furthermore, if something goes wrong, or the system's inputs are poisoned, this black-box scenario makes the problem much harder to identify, manage, and fix.
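One common way to claw back at least some transparency is to measure how much each input feature actually drives a model's predictions, for example with permutation importance. The sketch below is a generic illustration using scikit-learn on synthetic data, not a description of any particular product's internals.

```python
# Peeking into a "black box": permutation importance measures how much
# the model's accuracy drops when each feature is randomly shuffled.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```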

Why do we need people?

People aren’t perfect. But when it comes to talking and building relationships with people and making important strategic decisions, aren’t people the best candidates for the job?

Unlike AI, humans can adapt to changing situations and think creatively. Unconstrained by the predefined rules, limited data sets, and prompts that AI relies on, humans can use their initiative, knowledge, and past experience to address challenges and solve problems in real time.

This is especially important when making ethical decisions and balancing business (or personal) goals with societal impact. For example, AI tools used in recruitment processes may not consider the broader implications of rejecting candidates based on algorithmic bias and the downstream consequences this could have on diversity and inclusion in the workplace.

Because AI output is generated by algorithms, it also risks being formulaic. Consider generative AI used to write blogs, emails, and social media captions: repetitive sentence structures can make the text clunky and less engaging to read. Content written by humans is likely to have more nuance, perspective, and, let's be honest, personality. Especially when it comes to brand messaging and tone of voice, it can be difficult to emulate a company's communication style with the strict rules that AI follows.

With that in mind, while AI may be able to produce a list of potential brand names, for example, it's the humans behind the brand who truly understand their audience and know what will resonate most. And with human empathy and the ability to "read the room," people can connect better with others, building stronger relationships with customers, partners, and stakeholders. This is especially useful in customer service; as the World of Warcraft example above shows, poor customer service can cost a brand loyalty and trust.

Finally, humans can adapt quickly to changing conditions. If you need an urgent corporate statement about a recent event, or need to pivot away from a targeted campaign message, you need a human; AI tools take time to retrain and update, which isn't always an option.

What is the answer?

The most effective approach to cybersecurity is not to rely solely on AI or humans, but to leverage the strengths of both. This could mean using AI to handle large-scale data analysis and processing, while relying on human expertise for decision-making, strategic planning, and communication. AI should be used as a tool to support and augment the workforce, not replace it.
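As a minimal sketch of that division of labor, assume the AI assigns each alert a confidence score: high-confidence calls are automated, while uncertain ones are escalated to a human analyst. The thresholds and alert structure below are hypothetical.

```python
# Human-in-the-loop triage sketch (hypothetical thresholds and alerts):
# the model scores each alert; only confident calls are automated.
from dataclasses import dataclass

@dataclass
class Alert:
    description: str
    malicious_score: float  # model's confidence the alert is malicious, 0..1

def triage(alert: Alert) -> str:
    if alert.malicious_score >= 0.95:
        return "auto-block"            # high confidence: act immediately
    if alert.malicious_score <= 0.05:
        return "auto-dismiss"          # high confidence it's benign
    return "escalate-to-human"         # uncertain: a person decides

alerts = [
    Alert("known ransomware signature", 0.99),
    Alert("routine software update", 0.01),
    Alert("unusual login from new location", 0.60),
]
for a in alerts:
    print(f"{a.description}: {triage(a)}")
```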

Artificial intelligence is at the heart of ESET products, allowing our cybersecurity experts to focus on creating the best solutions for ESET customers. Learn how ESET uses AI and machine learning to better detect, investigate and respond to threats.