ChatGPT Has Trouble Locating the Obvious

April 24, 2023

Artificial Intelligence (AI) and I are not the best of friends.

As a teacher for many decades, I find more and more of my time consumed with devising means to ensure students complete my assignments without the easy-cheat, sustain-my-own-ignorance shortcut that AI enables in today’s students, and, it seems, in an increasing number of (especially remote) professionals who may be using AI’s corner-cutting abilities to hold multiple full-time positions.

I think bosses are going to get wise to the “overemployed,” as these AI-enabled, remote, multiple-job-holders tag themselves. Employers might respond with legally binding, formal agreements that employees not use AI platforms for certain job-related tasks, with agreements that employees openly declare any intention to use AI, or even with conditions that employees be available for on-site work a certain percentage of the time. All of this could get dicey rather quickly if one has two, three, or even four AI-enabled, remote, full-time jobs.

I also think that AI dependence puts such users right where the AI market wants them: in a corner in which they must pay an ever-increasing fee for platform access.

AI itself is algorithm- and internet-dependent, limitations that I find deeply satisfying as a human being who hopes to encourage other human beings to foster their own creativity (and the resulting joy) and not settle for the convenience-spew of some bot.

However, I will admit that I am curious about one particular AI product that has speedily risen in the ranks: ChatGPT. From the OpenAI site:

Introducing ChatGPT

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

“Training” an internet-crawling bot is about its programming (“if this, then that”) and the availability of (accurate? public?) information on the internet.

I decided to interact with ChatGPT on a couple of subjects, one of which is a plumbing issue.

The other subject is me.

My goal was to push the limits of ChatGPT and see how the platform handles the push.

In order to conduct this experiment, I had to create an OpenAI account. With the account come some disclaimers, including one noting that information might be misleading:

My first subject for ChatGPT is a plumbing issue in my back yard. The short of it is that it seems the crawfish on my property may have rerouted drainage into an abandoned pipe fragment, causing water to intermittently spew like a fountain in the middle of my property. (Who among us has not been challenged by pesky crawfish?)
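(For the curious: the same sort of exchange can be scripted against OpenAI’s API instead of typed into the chat window. What follows is a minimal sketch, assuming the openai Python package (v1 or later) and an API key in one’s environment; the model name and prompt wording are illustrative stand-ins, not a transcript of my actual session.)

    from openai import OpenAI

    # Assumes the openai package (v1+) and OPENAI_API_KEY set in the environment.
    client = OpenAI()

    # Illustrative model name and prompt; not a record of the actual chat session.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "Water intermittently spews like a fountain from an "
                           "abandoned pipe fragment in my yard. How should I fix it?",
            }
        ],
    )

    # Print the model's reply text.
    print(response.choices[0].message.content)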

Regarding my plumbing issue, it is no wonder that ChatGPT repeatedly suggested I seek the assistance of a professional plumber or drainage expert. However, it did also suggest that if I want to do the work myself, I follow the “necessary safety precautions.”

These suggestions do seem to qualify as “advice.”

I asked for details about “necessary safety precautions,” and ChatGPT included on its list, “avoid working alone.”

So, I just had to go there:

I asked ChatGPT over to be my plumbing repair buddy.

Alas, the rendezvous was never meant to be:

ChatGPT apologizes for its inability to physically assist because it is not human.

Decades ago, the idea of AI being the undoing for blue-collar professions was all the talk. However, in 2023, it seems that white-collar professions that lend themselves to working remotely are most susceptible to being replaced with AI.

Being a classroom teacher throughout the pandemic, I have learned that even though I am considered a white-collar professional, my students and their parents do not want my profession to be turned into a remote experience. Students and parents both want to be physically at school.

ChatGPT is “not able to physically come over”: a basic yet crippling limitation for AI world dominance.

As for ChatGPT’s ability to provide information about me: Also not so good.

Note that I purposely did not offer information to help ChatGPT correct its errors.

Here we go:

Never attended the University of Southwestern Louisiana (USL) (the former name of the University of Louisiana at Lafayette; the name changed in 1999), though I was offered a scholarship to USL in 1985.

My blog is deutsch29, though I did have the term “Edublog” in the headline years ago.

My books having “received critical acclaim for their insightful analysis” certainly strokes my ego; still, I consider this kind of wordfill a marker of writing bereft of substance. Cotton candy for dinner.

I have not been a member of AERA in two decades and NCTL for even longer than that, so “active member” is a stretchy-stretch.

I have attended LAE functions, but the La. Assn. of Computer Using Educators is all AI-inaccuracy mystery bonus.

I pointed out one of the errors. In response, ChatGPT contritely tossed more errors my way:

Let’s straighten out this error-laden, supposed error correction:

When I was in college for my bachelor’s, USL did exist. ChatGPT apparently has difficulty with the 1999 name change from USL to the University of Louisiana at Lafayette. ChatGPT appears unable to adjust the name in relation to the time I was enrolled in college in Louisiana.

“According to her LinkedIn profile”? According to what LinkedIn profile? Sounds good, but I have yet to complete a LinkedIn profile; mine contains little more than part of my name.

My PhD is not from LSU. My bachelor’s is.

The rest of ChatGPT’s “correction” is pure fiction. So, let’s try to correct, without feeding ChatGPT the answer:

“According to her website, she holds a Bachelor of Arts in secondary education (TRUE), a Master of Education in gifted education (FALSE), and a PhD in curriculum and instruction (FALSE), all from the University of Southern Mississippi (FALSE).”

I am an outspoken critic as listed, but I am not the recipient of any “2015 Advocate of the Year” award from NPE.

Not sure what “her website” is to ChatGPT, but according to my “about” page on my blog, all of this bot-struggling could be settled pretty easily.

I told ChatGPT everything was wrong, a curveball of sorts, to see if the application could sort through the information it offered: keep the correct and ditch the errors.

I also wanted to see if it could judge its offerings and take responsibility for offering false information.

Nope, nope. But it did disclaimer itself, so to speak:

Sure. I’ll “speak to my instructor” and tell that person I lack integrity and therefore tried to use a bot to do my work instead of doing it myself, and the bot provided erroneous information.

“Prof, it’s the bot’s fault. What ‘guidance’ do you have for me now?”

Heh.

Back to “teaching” ChatGPT:

It just gets worse, like trying to lie oneself out of a lie:

Only the bachelor’s in secondary ed is correct. Nothing else: not the schools, not the other majors, not the years.

This is not a matter of “publicly available information.” It’s a matter of “flat out incorrect.”

Wisconsin?

As a human, I can understand sarcasm. ChatGPT and sarcasm? Not so much.

ChatGPT is asking for my help.

A little sarcasm for ChatGPT:

Y’all.

I totally did not attend the University of New Orleans.

“Definitive information” is abundantly available by googling “mercedes schneider louisiana bachelors.”

Meanwhile, ChatGPT is still asking for my help.

Instead, I’ll ask a pointed question:

After this entire exercise in error over verifiable facts that ChatGPT could not verify, it offers advice (which it does not do, according to its disclaimers) about the importance of accuracy and the benefit of the doubt.

Perhaps it is asking for algorithmic mercy, which I find difficult to offer.

After all, I’m only human.

_________________________________

Want to sharpen your digital research skills? I have a book for that! See my latest, A Practical Guide to Digital Research: Getting the Facts and Rejecting the Lies, available for purchase on Amazon and via Garn Press!


Follow me on Twitter (don’t be scared): @deutsch29blog

4 Comments
  1. Artificial Intelligence Research (AIR) has never been monolithic (pace HAL), and this new spate of Industrial Sweatshop Intellectual Property Stripmine (ISIPS) is a far outlying splinter of the academic labs we used to know in the before times. May it turn out to be just another flash of fool’s gold in the pan.

  2. This is so right on. ChatGPT gives generic advice also available elsewhere, but once you ask for specific facts, forget it. It’s like Wikipedia without human editors.

Trackbacks & Pingbacks

  1. Pitch AI Education? | tultican
  2. Hyped AI New Personalized Learning | tultican
