AI幻觉
    2023-12-29  08:53    Shenzhen Daily

Meaning:

This is the Chinese translation of the English term “AI hallucination.” It refers to a phenomenon in which an AI model, often the large language model (LLM) behind a generative AI chatbot, or a computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, producing outputs that are nonsensical or altogether inaccurate.

Generally, when a user makes a request of a generative AI tool, they desire an output that appropriately addresses the prompt (e.g., a correct answer to a question). At times, however, AI algorithms produce outputs that are not grounded in their training data, misinterpret the prompt, or follow no discernible pattern. In essence, the AI “hallucinates” its response. Such errors arise from factors including overfitting, biases or inaccuracies in the training data, and high model complexity.
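
To make the overfitting factor above concrete, here is a minimal Python sketch (an illustration added for this explanation, not from the article): a degree-9 polynomial that memorizes ten noisy training points matches them almost perfectly, yet just outside the training range it confidently returns values far from the true relationship, an analogy for how an overfit model can “hallucinate” plausible but wrong answers.

    # Illustrative sketch: overfitting as an analogy for AI hallucination.
    import numpy as np

    rng = np.random.default_rng(0)

    # Ten noisy training points drawn from a simple linear relationship y = 2x.
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train + rng.normal(0, 0.1, size=10)

    # A degree-9 polynomial has enough capacity to memorize the noise (overfitting).
    coeffs = np.polyfit(x_train, y_train, deg=9)

    # On the training inputs the fit looks nearly perfect: residuals are close to zero.
    print(np.round(np.polyval(coeffs, x_train) - y_train, 6))

    # But slightly outside the training range the model "hallucinates" a value
    # far from the true line (the true value at x = 1.2 is about 2.4).
    print(np.polyval(coeffs, 1.2))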

Example:

A: 有研究发现AI回答用药问题很不靠谱。

Yǒu yánjiū fāxiàn AI huídá yòngyào wèntí hěn bú kàopǔ.

Studies have found that AI answers to questions about medication use can be quite unreliable.

B: 不能太依靠AI解决实际问题,因为会出现AI幻觉。

Bùnéng tài yīkào AI jiějué shíjì wèntí, yīnwèi huì chūxiàn AI huànjué.

We shouldn't rely too heavily on AI to solve real-world problems, because AI hallucinations can occur.
