As I was doing research for the latest issue of Insigniam Quarterly, I decided to test-drive the shiny new toy that has everyone with a keyboard in a fervor: ChatGPT.
ChatGPT—the advanced language model à la mode capable of generating text-based responses and engaging in natural language interactions—is the brainchild of OpenAI, an artificial intelligence research organization based in San Francisco, CA, led by CEO Sam Altman.
On May 16, Altman told the U.S. Senate Judiciary Committee that OpenAI’s technology will “entirely automate away some jobs,” but could eventually create new ones. Altman’s confidence aside, ChatGPT fields over 10 million queries per day, on average (for reference, Google handles roughly 8.5 billion searches per day), suggesting that the horse has left the proverbial barn.
Given all I’d heard and read, I was curious to see if the platform could help me expedite research and, perhaps, completely change the way I work. Sure enough, I was presented with a smorgasbord of highly relevant statistics, quotes, and even out-of-print book excerpts. In the publishing world, this is what’s called a win.
So imagine my surprise when, during the fact-checking process, I discovered that much of the supposedly validated information the ChatGPT session had provided me turned out to be bunk. The errors ranged from invented reports, books, and articles that don’t actually exist to all-too-perfect quotes attributed to well-known executives and leaders that couldn’t be validated, no matter how granular my Google searches became.
Luckily, there was time in our production schedule to adjust. It remains to be seen if “ChatGPT fact checkers” will be one of the new jobs Mr. Altman creates.
Granted, I should have known better than to put all my eggs in the ChatGPT research basket. But that doesn’t mean the tool is without merit. If a tool is only as good as the hands that wield it, this one is still being forged.
While AI language models and ChatGPT have made significant advancements, it is crucial to critically evaluate and verify information obtained from AI models, especially when it comes to news articles and factual accuracy.
In 2021, The Guardian highlighted concerns about AI language models, including OpenAI’s GPT-3, generating biased or misleading content. The report mentioned that the models can sometimes provide incorrect or harmful information, especially when it comes to sensitive topics or controversial subjects.
According to Stuart Russell, professor of computer science at the University of California, Berkeley, “Progress in AI is something that will take a while to happen, but [that] doesn’t make it science fiction.” However, The Guardian noted that Mr. Russell said researchers had been “spooked” by their own success in the field.
Additionally, a research paper titled “Poisoning the Biter Bit: Generating and Detecting Adversarial Examples for Neural Ranking Models,” posted in 2021 to the arXiv preprint server (hosted by Cornell University), pointed out vulnerabilities in AI language models, including GPT-3. The study demonstrated that these models can be manipulated into producing misleading or biased results by carefully crafted input prompts.
Measure Twice…or Three Times. Or Four.
To ensure the information provided by a ChatGPT session and AI language models is as accurate as possible, here are some helpful steps to consider:
- Cross-Reference Information: Verify what is provided by ChatGPT by cross-referencing it with reliable and authoritative sources. Consult reputable news outlets, academic publications, or trusted websites to confirm the accuracy and validity of the information.
- Use Multiple Sources: Relying on a single source, including ChatGPT, can present limitations or biases. Consult multiple sources to gain a broader perspective and reduce the risk of relying on potentially incorrect information.
- Fact-Checking Tools: Utilize websites that specialize in evaluating the accuracy of information, such as Snopes, PolitiFact, or FactCheck.org. These tools can help you identify potential inaccuracies, misleading claims, or false information.
- Contextual Understanding: Recognize the significant limitations of ChatGPT and AI models in general. Because their training data has a cutoff date, they may not have access to the most up-to-date information or be aware of recent events.
- Get Specific: Be clear in your instructions to help narrow down the topic and receive more accurate responses. General or ambiguous queries may result in less reliable information. And consider adding the tag “+ source” when searching to ensure validation and ease when cross-referencing.
It’s important to note that while these steps can help improve the accuracy of the information obtained, they do not guarantee 100% accuracy. AI models like ChatGPT are trained on vast amounts of data but can still produce errors or incomplete information.
Shiny new toys aside, being critical and exercising our own judgment is, as always, crucial.