Hallucinate: Cambridge Dictionary’s word of the year
2023-11-28 08:53 | Shenzhen Daily

THE Cambridge Dictionary has named “hallucinate” its word of the year for 2023, reflecting a surge in interest in generative artificial intelligence (AI) tools.

These tools, which include ChatGPT and Bard, can produce plausible but false information, often referred to as AI hallucinations.

The Cambridge Dictionary — one of the world’s most popular online dictionaries for learners of English — has updated its previous definition of “hallucinate” to include the new meaning, highlighting that when AI hallucinates, it generates false information.

These AI hallucinations can sometimes appear nonsensical, but can also be misleadingly plausible. Their impact has been observed in real-world instances. For example, a U.S. law firm used ChatGPT for legal research, which led to fictitious cases being cited in court. In Google’s promotional video for its own Bard system, the AI tool made a factual error about the James Webb Space Telescope.

Wendalyn Nichols, Cambridge Dictionary’s publishing manager, said: “The fact that AIs can hallucinate reminds us that humans still need to bring their critical thinking skills to the use of these tools. AIs are fantastic at churning through huge amounts of data to extract specific information and consolidate it. But the more original you ask them to be, the more likely they are to go astray.

“At their best, large language models can only be as reliable as their training data. Human expertise is arguably more important — and sought after — than ever, to create the authoritative and up-to-date information that LLMs (large language models) can be trained on.”

The entry of "AI hallucinations" into everyday usage signals a shift in how people perceive and anthropomorphize AI technology.

Henry Shevlin, an AI ethicist at the University of Cambridge, said: “The widespread use of the term ‘hallucinate’ to refer to mistakes by systems like ChatGPT provides a fascinating snapshot of how we’re thinking about and anthropomorphizing AI. Inaccurate or misleading information has long been with us in the form of rumors or fake news.”

Addressing and mitigating these hallucinations will play a crucial role in the adoption of generative AI, experts said, with efforts underway to cross-check outputs against reliable sources and to incorporate human feedback into reinforcement learning.

(China Daily)
