Generative artificial intelligence (GenAI) has forever changed how we interact with technology.
Solutions like ChatGPT can produce well-reasoned, articulate answers in seconds. Most users carefully review AI-generated content before applying or repurposing it — at least at first. Over time, it’s easy to become too trusting of GenAI. That’s when the trouble begins.
Generative artificial intelligence tools are making their way into several high-stakes fields, including industrial maintenance. While the potential upside of GenAI is huge, the risks are just as glaringly obvious.
In a recent interview, Tom Rombouts, Director of Reliability and Data-Driven Solutions at I-care, compared GenAI to a drunken uncle at a family party. Let’s unpack this eye-opening parable so you can understand how to use Rombouts’ insights to rein in unruly AI.
How AI Behaves Like a Drunken Uncle
Sir Winston Churchill once said, “The greatest lesson in life is to know that even fools are right sometimes.” GenAI isn’t just right sometimes — it’s right most of the time. Therein lies the problem.
Almost every response from tools like ChatGPT is grammatically flawless and worded using expert syntax. The responses inspire confidence, especially considering that they’re conveyed exclusively through text. Here’s where Rombouts’ drunken uncle parable comes into play.
Imagine you’re at a family gathering, and everyone is engaging in a rousing discussion. Among them is your uncle. He’s a smart man and usually gives sound advice. Today, however, he’s had too much to drink.
Your uncle speaks confidently as he shares investment advice and grand theories about the stock market. Fortunately, you can hear him slurring his speech and see him stumbling. You therefore understand that his advice may not be trustworthy in his current state.
Now imagine that you were only reading a transcript of his words, which corrected any slurs or other inconsistencies. His advice would be much more convincing if you didn’t know he was drunk.
That’s the exact problem GenAI users face. The text appears polished and highly convincing, especially if the reader doesn’t have the requisite expertise to differentiate balderdash from sound advice.
Why AI Can’t Always Be Trusted
Generative AI is built on vast amounts of data, but not all of that information is accurate or reliable. Sometimes, biased or outright false information gets integrated into responses. The result is misleading outputs that could have serious consequences for your business.
How to Sober Up Generative AI
Combating AI’s tendency to generate misleading content calls for robust quality-control mechanisms, including:
- Fact-checking
- Human oversight
- User training to identify AI errors
- Sophisticated AI tools that admit uncertainty
Furthermore, it’s important to test an AI model’s ability to differentiate good data from bad data. Intentionally feeding a model misinformation can train the program’s “BS detection.”
Use (but Always Vet) AI
Don’t let the drunken uncle parable deter you from using artificial intelligence. Your business should be using AI. Just make sure you verify the information you receive to prevent embarrassing errors.
With the right checks in place, AI can be a game-changer for your organization.
This article contributed by Tom Rombouts, Reliability Director