Since OpenAI launched its early demo of ChatGPT last Wednesday, the tool has already attracted over a million users, according to CEO Sam Altman. That is a milestone, he points out, that GPT-3 took nearly 24 months to reach and DALL-E over two months.

The “interactive, conversational model,” based on the company’s GPT-3.5 text-generator, certainly has the tech world in full swoon mode. Aaron Levie, CEO of Box, tweeted that “ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward.” Y Combinator cofounder Paul Graham tweeted that “clearly something big is happening.” Alberto Romero, author of The Algorithmic Bridge, calls it “by far, the best chatbot in the world.” And even Elon Musk weighed in, tweeting that ChatGPT is “scary good. We are not far from dangerously strong AI.” 

But there is a hidden problem lurking within ChatGPT: it quickly spits out eloquent, confident responses that often sound plausible and true even when they are not.

ChatGPT can sound plausible even if its output is false

Like other generative large language models, ChatGPT makes up facts. Some call this “hallucination” or “stochastic parroting,” but whatever the label, these models are trained to predict the next word for a given input, not to check whether a statement is actually true.
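To make that concrete, here is a minimal sketch of next-word prediction using Hugging Face’s transformers library, with the small, open GPT-2 model standing in for ChatGPT (whose underlying model is not publicly available to run). The prompt and the top-5 printout are illustrative choices, not OpenAI’s actual setup.

```python
# A minimal sketch of next-word prediction, assuming GPT-2 as a
# stand-in for ChatGPT's (much larger, non-public) model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the moon was"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# The model's only output is a score for every token in its vocabulary
# as the next word. It ranks continuations by likelihood; nothing here
# consults a knowledge base or verifies the resulting claim.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode([int(token_id)])!r}: {float(p):.3f}")
```

Whichever continuation wins is simply the most statistically likely one given the training data, which is why a fluent, confident-sounding answer can still be flatly wrong.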

Some have noted that what sets ChatGPT apart is that it is so darn good at making its hallucinations sound reasonable. 

Technology analyst Benedict Evans, for example, asked ChatGPT to “write a bio for Benedict Evans.” The result, he tweeted, was “plausible, almost entirely untrue.” 

More troubling is the fact that there are obviously an untold number of queries where the user would only know whether the answer was untrue if they already knew the answer to the question posed.