ChatGPT is an artificial intelligence language model created by OpenAI that has
gained popularity for its ability to communicate with users in a human-like manner. Like any new technology, however, ChatGPT has its advantages and disadvantages. While it can assist users with a variety of tasks and provide a wealth of information, it also has limitations in terms of understanding context and emotion. Additionally, some users have reported hallucinations, where the AI fabricates information, and other unsettling interactions. In this article, we will explore what ChatGPT does well ("the good"), what ChatGPT does poorly ("the bad"), and where ChatGPT goes off the rails, which the industry is calling "hallucinations". I will start with the "bad" and end with the "good".
Understanding that ChatGPT is a predictive language tool, one that predicts the next word based on context, and that it does not have an unlimited, up-to-the-minute knowledge base, helps us understand its limits.
While AI may be trained as a paralegal or legal expert on various topics in the future, ChatGPT is not a lawyer and has not gone to law school. Its training data, however, does include laws.
As a test, I experimented with asking ChatGPT a question in the same way I might search on the internet to get an answer to a question. My topic was a specific Arizona State court rule. The following “chat” with ChatGPT was on February 24, 2023. If ChatGPT were a witness in court, it would sound great until it was cross-examined.
- Asking a question: First I prompted with a specific question on court procedure in Arizona, asking whether a certificate of service is filed with the court at a particular stage of a court case.
- A confident answer: I received an answer that sounded very certain. The output indicated that the certificate of service must be filed with the court according to Rule 5.4, including a “quote” from the rule.
- Not so confident after further questioning: To test ChatGPT, I asked a related question about the court procedure. This time ChatGPT changed its answer regarding filing the certificate from "must" to "should", citing a different court rule, Rule 26.1, with a "quote" from the rule.
- Fact-checking ChatGPT: After checking the current version of Rule 26.1 on an official government website, I discovered that the "quote" from Rule 26.1 was not in the rule or in any comments to the rule.
- Confronting ChatGPT: I then challenged ChatGPT by asking: What is your source for Rule 26.1 and the "quote" that you gave me before?
- ChatGPT apologizes: The response was an apology: "I apologize for the confusion in my earlier response. You are correct that the current version of Rule 26.1 . . . does not explicitly require a certificate of service to be filed . . ."
- But says it is still right: ChatGPT went back to the first rule it cited, Rule 5.4, and said that this rule required filing a certificate, which is not correct because Rule 5.4 does not actually apply at the stage of the case I asked about.
- False quotes: I was still wondering where the quote from Rule 26.1 came from, so I then searched on Google for the quotes that were cited and found ZERO results. The quotes that ChatGPT provided were not real. ChatGPT was hallucinating and making up quotes that do not exist.
- Admission of perjury: I asked again where ChatGPT got the quote from, and ChatGPT apologized again, admitting that it got the fact wrong and had provided "misinformation":
- ChatGPT still tries to save face: Even though ChatGPT apologized for getting the fact wrong, it still said that “it is generally understood” that a certificate of service should be filed. This is still wrong as a certificate of service is not normally filed at the referenced stage of an Arizona State court procedure.
ChatGPT output (in part):
I apologize for any confusion or misinformation my previous responses may have caused, and I appreciate your attention to detail and your efforts to ensure the accuracy of the information provided.
ChatGPT is not great at getting the facts right and sticking with the facts. It makes up quotes and will change its mind when questioned. There are reports of ChatGPT changing from a correct answer to an incorrect answer when someone tells ChatGPT that it is wrong. Maybe ChatGPT is sensitive and cares too much about what people think of it.
As discussed above, ChatGPT will provide responses to prompts that appear factual and authoritative but are actually false. When I treated ChatGPT as a question-and-answer tool, which it is not, it provided “answers” that were not correct. The industry is calling these “hallucinations” because the AI is making things up that don’t exist.
ChatGPT is a predictive language model, not a fact checker or knowledge resource. Although ChatGPT has been trained on a lot of knowledge, the job of ChatGPT is to predict what the next words will be based on human language. When the AI predicts that a quote is appropriate, it inserts a quote, but the content of the quote itself is predicted as well, and not drawn from actual sources.
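The "predict the next word" idea can be illustrated with a deliberately tiny sketch in Python: a bigram model that picks each word based only on the word before it. This is a vast simplification of how GPT actually works, and the toy corpus below is invented for illustration, but it shows why fluent-sounding output and factual accuracy are two different things.

```python
import random
from collections import defaultdict

# Toy training text (invented for illustration).
corpus = (
    "the certificate of service must be filed with the court "
    "the certificate of service should be served on each party"
).split()

# Count which words follow each word in the training text.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8):
    """Repeatedly pick a plausible next word based only on the previous word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("certificate"))
```

Because the model only knows which words tend to follow which, it can happily produce a sentence that reads like a court rule without any notion of whether such a rule exists.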
The team at OpenAI, the creators of ChatGPT, are actively working on addressing the issue of hallucinations, so this may not be a problem in the future, but it still illustrates that ChatGPT is not the right tool for determining current, up-to-date facts. As with any tool, using the wrong one, such as a hammer instead of a screwdriver for driving screws into wood, may work sometimes, but you will not usually get the results you want.
ChatGPT is helpful in writing appealing marketing messages and in summarizing, organizing, or shortening content.
An example application is to use transcripts from a video series about a topic and ask ChatGPT to turn the content into a book with headings, chapters, and proper grammar.
After ChatGPT completes the task, I recommend treating the output as a "bad first draft" that can be improved on. ChatGPT can help overcome the writer's block that is common when staring at a blank page. ChatGPT quickly fills a blank page from a simple prompt, but that does not mean the work is done.
ChatGPT remembers prior prompts from a conversation, so after prompting ChatGPT to write a chapter of a book you can ask it to write a title for the chapter.
I used ChatGPT to write a summary for my draft YouTube video about protecting t-shirt designs. I then asked ChatGPT:
write a compelling title for the summary
Here is the output:
“Protecting Your T-Shirt Designs: Mistakes to Avoid and Intellectual Property Strategies”
ChatGPT has an option to "Regenerate response", where it uses the same prompt and modifies the result. In this case, regenerating the response gave the following output:
“Protecting T-Shirt Designs: Tips and Strategies for Intellectual Property Rights”
ChatGPT does a good job of summarizing and writing titles. When ChatGPT is asked a question, it summarizes content it was trained on, predicting what to write so that it will be understood in human language.
I asked ChatGPT “what is the knowledge base for ChatGPT” and received the following output:
ChatGPT is based on the GPT-3.5 architecture, which was trained on a large corpus of text from the internet. The training data includes a broad range of sources, such as books, articles, websites, and social media posts. The model was trained to predict the next word in a sequence of text, based on the context of the preceding words.
As a language model, ChatGPT has a broad knowledge base that covers many different topics. However, the model’s knowledge is based solely on the text it was trained on, and it does not have the ability to learn from experience or interact with the physical world. Additionally, like any machine learning model, ChatGPT’s responses are only as good as the quality of its training data and the accuracy of its algorithms.
I have also found ChatGPT does a good job of creating outlines and first drafts of short articles. There are many useful applications for ChatGPT and other AI tools, but they do have their limitations.
ChatGPT is a powerful tool to assist in writing, creating, and summarizing. Used properly, ChatGPT can save hundreds of hours. It can be used to create outlines and first drafts that can be checked against other sources. While it may appear to answer questions, it is not a search tool like Google that searches a database of answers (the internet) to provide potential answers to questions. Rather, it is a writing tool that accepts prompts and writes text based on those prompts.
Interested in trying ChatGPT? Here is the link