Google on Monday, February 6, unveiled its AI-powered chatbot Bard, a direct competitor to OpenAI’s ChatGPT, with the company stating that it would become “more widely available to the public in the coming weeks.”
The bot already appears to have challenges, however: experts have spotted a factual error in its very first demo.
Google shared a GIF in which Bard answered the query: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”
Bard responded with three bullet points, including one stating that the telescope “took the very first pictures of a planet outside of our own solar system.”
Bard’s answer was incorrect: a number of astronomers pointed out that the first image of an exoplanet was taken in 2004, a fact corroborated on NASA’s website.
Astrophysicist Grant Tremblay, reacting to Bard’s blunder, tweeted:
“Not to be a ~well, actually~ jerk, and I’m sure Bard will be impressive, but for the record: JWST did not take ‘the very first image of a planet outside our solar system.’”
Bruce Macintosh, director of the University of California Observatories at UC Santa Cruz, also pointed out the error, tweeting:
“Speaking as someone who imaged an exoplanet 14 years before JWST was launched, it feels like you should find a better example?”
Bard’s very first answer contained a factual flub. Image: Google
Tremblay in a follow-up tweet wrote: “I do love and appreciate that one of the most powerful companies on the planet is using a JWST search to advertise their LLM. Awesome! But ChatGPT etc., while spooky impressive, are often *very confidently* wrong. Will be interesting to see a future where LLMs self error check.”
In the wake of Bard’s incorrect answer, shares of Google’s parent company, Alphabet, dropped by nearly eight percent, instantly wiping roughly $100 billion off the company’s market value.
As Tremblay aptly noted, the major shortcoming of AI chatbots like ChatGPT and Bard is their propensity to state incorrect information as fact; the systems frequently make things up because they are, at bottom, auto-complete systems.
The chatbots are trained on huge corpora of text and work as pattern analysers that predict which word is most likely to come next in a sentence, rather than querying a database of verified facts to answer questions.
In other words, they are probabilistic, not deterministic: they deal in probable answers, not verified ones, as the sketch below illustrates.
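To make that concrete, here is a minimal sketch of next-word prediction. It assumes the open-source Hugging Face transformers library and the small GPT-2 model, which is not Bard’s or ChatGPT’s actual stack (neither Google nor OpenAI has published theirs); the prompt and top-k value are purely illustrative. It shows the model assigning probabilities to candidate next words rather than consulting any store of facts.

```python
# Minimal sketch of next-word prediction (assumes the Hugging Face
# "transformers" library and the small "gpt2" model; the prompt and
# k=5 are illustrative choices, not taken from the article).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first image of an exoplanet was taken in"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # scores over the whole vocabulary

# Turn the scores at the final position into a probability distribution.
# The model ranks *likely* continuations; nothing here checks them
# against a database of facts.
probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(probs, k=5)
for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(i))!r}: {p.item():.3f}")
```

Whatever tokens top the list are simply high-probability continuations; the loop contains no verification step, which is exactly why a confident-sounding answer can still be wrong.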
Microsoft, while demoing its AI-powered Bing search engine, tried, albeit unsuccessfully, to pre-empt these issues by shifting responsibility to the user.
A disclaimer from the company reads: “Bing is powered by AI, so surprises and mistakes are possible. Make sure to check the facts and share feedback so we can learn and improve!”
But many users have asked: why should we have to check the facts ourselves when the product is marketed as an artificial intelligence that can answer basic queries?
Google spokesperson Jane Park said in a statement to The Verge: “This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program. We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”