Bias: A Very Real Issue with AI
An Interview with ChatGPT, Part 2
by Mary Boza Crimmins
ChatGPT Reveals Its Flaw
Last week I shared a real conversation with the artificial intelligence app ChatGPT. Seems like an oxymoron - real and AI. Here's the link. I think you’ll find it amusing, informative, and concerning.
This week I’m delving deeper into one of ChatGPT’s weaknesses, one that came up twice in the interview. Here’s a recap.
ME: What are your weaknesses?
CHAT: Potential for biased or inaccurate information.
ME: Do you think AI is dangerous to humanity?
CHAT: The question of whether AI is dangerous to humanity is a complex and debated topic. AI, like any powerful technology, has the potential for both positive and negative impacts, depending on how it is developed, deployed, and used. Here are some perspectives to consider:
Ethical concerns: There are ethical considerations surrounding AI, such as privacy, bias, fairness, and transparency. If AI systems are not designed and governed with careful consideration of these factors, they can reinforce existing biases, infringe on privacy, or lead to unjust outcomes.
Thinking and Thoughtfulness
I’m convinced bias is one of the primary afflictions of today’s culture because it reflects a lack of thoughtfulness: thoughtfulness about how we treat others and about thinking logically and rationally. Bias leads to misunderstandings, egomania, and disharmony. I’m biased. It’s a rare person who isn’t. The problem is not so much that we are biased; it’s what we do and don’t do with it.
“The tragedy is,” as Adam Grant points out in Think Again, “we’re usually unaware of the resulting flaws in our thinking.” While ChatGPT can be a helpful tool, much like my GPS app, calculator, and Grammarly, I have concerns. Chat acknowledges that biases can be sustained and invigorated by its use. We humans already have so many biases. Binary bias occurs when we narrow options or points of view down to just two; complex situations require solutions beyond this way or that way. “Desirability bias is seeing what we want to see. Confirmation bias is seeing what we expect to see.” Combined, these two can leave us unmotivated to change our minds. (Grant)
We’re also reminded in Think Again that the advent of the internet was accompanied by wonderful expectations of being exposed to different points of view. Yet the internet has become a “weapon of misinformation and disinformation.” If you’ve ever played around with ChatGPT, you know it’s like a Google search on crystal meth: it spits out information instantly. The question is whether its users will take the time to think, analyze, and do further research to avoid the bias trap.
Bye-Bye Bias
What can you do to counteract bias regardless of whether the source is ChatGPT or a human?
Read Think Again by Adam Grant. This book should be required reading.
Examine all information critically. (My husband says I’m a skeptic. I don’t believe him, though.)
Treat ChatGPT like any other source, and never rely on one source for news or other information. (If you don’t like to research, contact me. I love it!)
Be willing to have your mind changed.
Seek different perspectives by engaging in actual conversations with live humans.
I’m not suffering from status quo bias. Change is inevitable. I’d tell my students, “The more I know, the more I know that I don’t know.” And what I know is that ChatGPT has much to offer, but the last thing we need in this world is to perpetuate bad ideas, misinformation, and unfounded assumptions. Be aware of your biases so you don’t give artificial intelligence like ChatGPT too much power. Now there’s the segue to next week’s blog: the dangers of superintelligence.
Until next time, may you live inspired!