
Ethics and Artificial Intelligence: 3 Issues with Generative AI in Business (AI on the Move)

6 Min Read
Jenn DeRango
Senior Content Manager | Virtual Community

This piece is part of AI on the Move, a PAN series that explores the ongoing impacts of artificial intelligence on marketing and PR.

In the first installment of this four-part series, we discussed the power of using generative AI tools like ChatGPT as a supplement to human intelligence rather than a replacement. In this latest article, we explore generative AI and the three main problems businesses must overcome to use it ethically.

Generative AI’s Ethical Business Dilemma

Work smarter, not harder. More than a mantra, these four words have the power to change not just our personal lives but our professional ones. Whether the phrase was first muttered as a response to the toxic hustle-and-grind culture of our still-recent past, or as a genuine attempt to help others achieve more while doing less, doesn’t really matter.

How we interpret and implement it in our lives, however, does. 

Choosing to run a robot vacuum instead of manually cleaning floors is one thing. Using a free or low-cost tool to write content that has the potential to spread biases and misinformation is something else entirely. 

But it would be a mistake to dismiss the use of generative AI in business altogether. Just as a marketer might use AI to supplement human output in the writing and editing process, human intelligence can ensure machines are not only used ethically but also consistently return accurate, unbiased information.

Recognizing these issues and taking deliberate steps to combat them is how businesses can overcome them.

3 Ethical Problems with Artificial Intelligence in Business  

As we collectively ride the latest AI-craze wave, business leaders must continue to ask hard questions and take part in tough conversations surrounding the ethical issues of generative artificial intelligence. 


Here are three we’re seeing: 

The Problem: Misinformation

In an unexpected but not-so-surprising announcement, Google finally gave its answer to OpenAI’s ChatGPT: Bard. And just like the others that came before it, the search giant’s newest tech is already raising red flags related to, you guessed it — misinformation.

After Bard supplied inaccurate information during a live demo last week, Google lost $100 billion in market value overnight — the equivalent of $310 per person in the United States.

An overreaction by the public, or an ethically appropriate and thoughtful response? I’d like to argue the latter. 

Between the deluge of misinformation published online and the challenges we face distinguishing fact from fiction in search of the “truth” — what level of fact-checking or accuracy can we reasonably expect from generative AI users?

After all, we live in a world where we can see a headline and share the article it’s attached to with hundreds or thousands of people without ever reading it. Integrity in journalism is up for debate, and the virality of tools like ChatGPT, along with their currently unchecked use in business, isn’t helping.

Businesses that plan to use ChatGPT or Bard need to be stewards of accuracy and truth. Rather than hit publish on whatever the tool generates, build AI compliance checks into internal review processes to ensure content is factual.
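To make that idea concrete, here’s a minimal sketch of what such a compliance gate could look like, assuming a simple internal review workflow. Everything here — the Draft and Claim classes, the verified_by field, the example claim — is hypothetical and illustrative, not a real tool or framework:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: block publication of AI-generated drafts until a
# human reviewer has verified every factual claim they contain.

@dataclass
class Claim:
    text: str
    verified_by: str | None = None  # name/email of the human fact-checker

@dataclass
class Draft:
    title: str
    body: str
    ai_generated: bool
    claims: list[Claim] = field(default_factory=list)

def ready_to_publish(draft: Draft) -> bool:
    """AI-generated drafts pass only once every claim has a human sign-off."""
    if not draft.ai_generated:
        return True  # human-written drafts follow the normal review process
    return all(claim.verified_by for claim in draft.claims)

# Usage: the draft stays blocked until someone fact-checks its claim.
draft = Draft(
    title="Weekly AI roundup",
    body="...",
    ai_generated=True,
    claims=[Claim("Google lost $100 billion in market value overnight")],
)
assert not ready_to_publish(draft)   # blocked: no human sign-off yet
draft.claims[0].verified_by = "editor@example.com"
assert ready_to_publish(draft)       # cleared for publication
```

The point isn’t the code itself; it’s the design choice it encodes: publication becomes impossible, by construction, without a named human taking responsibility for each claim.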


The Problem: Biases

Artificial intelligence is often discussed as an entity entirely devoid of human intelligence. Like some pie-in-the-sky concept instead of an actual product, created by actual people.

Which, by the way, it is. 

And every person involved in building ChatGPT — from the creators to the engineers to the programmers — has left their mark on the product. Whether intentionally or not, human biases exist in generative AI in three distinct ways: 

  1. By those who built it 
  2. By those who use it 
  3. By those who write content to inform it 

Any bias or inconsistency present in AI-generated content is ultimately the responsibility of the people who train it. Marki Conway, a member of the PAN team focused on diversity, equity, and inclusion, understands this problem and how businesses can help mitigate it.

“If we want more diverse and ethical AI, it starts with having diverse programmers. It’s impossible to develop an AI without bias because biases are inherent in everything we do — but that doesn’t mean we shouldn’t try to mitigate it. That’s why it’s so critical to have a diverse team of people programming AI technology to ensure multiple perspectives.” 


As we discussed in part one of this series, every piece of content produced by tools like ChatGPT is a regurgitation of other content on the web. And because it isn’t human, the software can’t reliably detect whether a bias exists.

Some biases in generative AI seem trivial — like ChatGPT’s supposed preference for Microsoft over Google. Other biases, however, are much more serious and consequential than which search engine someone prefers. 

Take, for instance, a simple historical question: “What happened after the pilgrims landed on Plymouth Rock?” When posed to ChatGPT, the technology returned a rather flowery description, not unlike what most of us grew up learning in school: 

“After the Pilgrims landed at Plymouth Rock in 1620, they established a settlement called Plymouth Colony in what is now Massachusetts. The first winter was difficult for the Pilgrims and many of them died from disease and starvation. However, with the help of the local Wampanoag tribe, the Pilgrims learned how to grow crops and hunt in their new environment, which helped them to survive and thrive.

In 1621, the Pilgrims held a feast to celebrate their successful harvest, which is now known as the first Thanksgiving. This feast became a tradition, and over time, it evolved into the holiday that we now celebrate in the United States every year on the fourth Thursday in November.

In the years following their arrival, the Pilgrims continued to face challenges such as conflict with other native tribes and issues with the colonial government in England, but they persevered and their colony grew and prospered. Plymouth Colony eventually became one of the original 13 colonies that would go on to form the United States of America.”

But they persevered.

This simple, straightforward explanation may seem harmless, but repeating one-sided or misrepresented versions of history and other aspects of our society is problematic. Add machines into the mix, and biases won’t just spread; they’ll be accepted as truth more quickly.

It is critical we recognize biases when they exist in the world and in our use of generative AI.  


Related Read — Diversity in Public Relations: Why We Need It 

The Problem: Sentience

Sentience, or the capacity to experience feelings and sensations, is an innate human ability. The fact that machines lack it is one of the primary arguments against using generative AI in marketing — a point I publicly support.

So, if tools like ChatGPT are incapable of having feelings and other human emotions, why are we talking about them in relation to ethics?

Before answering that, let’s rewind to summer 2022. Google released an AI program as part of a scientific research project called LaMDA, or Language Model for Dialogue Applications. Created to produce human-sounding text, LaMDA made minor headlines that summer when a former Google engineer released company documentation citing the possibility that LaMDA was indeed sentient, or capable of human feelings. The documentation specifically claimed that “LaMDA…has worries about the future and reminisces about the past.”

Unlike OpenAI’s ChatGPT, generative AI built on LaMDA — like Google’s Bard — draws on real-time feedback and content as it’s published to answer user queries. This feature was purpose-built to help ensure a high bar for safety and quality — which should be great for ethics, right?

The intent, yes. The execution, not so much. As artificial intelligence becomes more sentient, the line between humans and machines could not only blur, but disappear completely.  

Do I want Alexa to keep track of my grocery list? You bet.
Do I want Alexa vocally judging my diet or budgeting decisions? Absolutely not. 

Thanks to films like Ex Machina and Spike Jonze’s Her, the cautionary tale of what can happen when machines are programmed to mimic, or actually possess, human emotions has already been told. When AI is advanced enough to understand the nuances of how we think and feel, it could also be capable of more complex logic, including the ability to lie and deceive.

Businesses considering generative AI need to be aware of the potential for sentience, or at the very least, the illusion of it. Taking extra steps to fact-check information, screen for biases, or bring in a sensitivity reader can help businesses stay on the right side of history, and of ethics.


Related Read — Support, Not Replace: The Modern Marketer’s Approach to AI & ChatGPT 

The Solution: Ensure AI-Generated Content is Factual, Unbiased, and Authentic

People should never be fully removed from marketing roles. Even if ChatGPT-200 comes out one day with the ability to read minds, humans will still be a necessary part of the equation. 

Because of how tools like ChatGPT and Bard are programmed to function, removing the possibility of bias, misinformation, and sentience entirely is impossible. What business leaders and marketers can do, however, is use collaborative intelligence to ensure the content produced through generative AI is factual, unbiased, and rooted in authentic human emotion — not a machine’s impression of it. Businesses also need to stay up to date on regulations and legislation to maintain compliance in this rapidly evolving market.
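One lightweight way to operationalize that collaboration is a pre-publication checklist that mirrors the three issues above. This is a hypothetical sketch, not a real compliance framework; the field names and messages are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical sketch: a pre-publication checklist covering the three
# ethical issues discussed above. A draft ships only when the list is empty.

@dataclass
class ReviewChecklist:
    facts_verified: bool = False         # misinformation: claims checked by a human
    bias_reviewed: bool = False          # bias: framing screened by diverse reviewers
    human_voice_confirmed: bool = False  # sentience: emotion edited in by a person

    def outstanding(self) -> list[str]:
        """Return the review steps still left before publishing."""
        labels = {
            "facts_verified": "fact-check the AI-generated claims",
            "bias_reviewed": "screen for one-sided or misrepresented framing",
            "human_voice_confirmed": "replace machine tone with a human voice",
        }
        return [msg for name, msg in labels.items() if not getattr(self, name)]

checklist = ReviewChecklist(facts_verified=True)
print(checklist.outstanding())
# ['screen for one-sided or misrepresented framing',
#  'replace machine tone with a human voice']
```

However a team implements it, the underlying principle is the same: the machine drafts, but a human owns each ethical checkpoint before anything goes live.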

In the next article of this series, we’ll be sharing actionable tips for how to use, and not use, ChatGPT in business. Subscribe to our newsletter to be the first to read it. 
