Artificial Intelligence Hallucination
AI Hallucination:
AI (artificial intelligence) is the human achievement of building robots and machines that can process data and perform tasks in ways that resemble the human brain. An AI system is a computer program that searches the data it is given for relevant patterns and uses them to perform a task. But just like the human mind, AI sometimes gets caught up in hallucinations.
What is AI Hallucination?
In the human brain, a hallucination is a false sensory experience or perception generated without a real external cause. AI systems, by contrast, are algorithm-based: they can perform a task only with the help of the data that is fed into them. An AI system cannot perform a task for which it has no prior data, which means it cannot create things from scratch the way the human mind can. Nevertheless, errors and glitches can occur while the system is processing data, and the resulting false or fabricated output is what is called AI hallucination.
AI hallucination gained prominence around 2022 alongside the rollout of large language models (LLMs) such as ChatGPT. One potentially fatal flaw of LLMs, exemplified by ChatGPT, is that the information they generate is unverified. For example, ChatGPT often generates pertinent but non-existent academic reading lists. Data scientists attribute this problem to "hallucination" and to "stochastic parrots." The term stochastic parrot refers to the failure mode in which the system starts repeating its training data or its patterns instead of producing actual reasoning or grounded information.
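To make the "stochastic parrot" idea concrete, the short Python sketch below flags output whose three-word phrases repeat unusually often, which is one rough sign of parroted or looping text. This is only an illustrative heuristic: the function names, the trigram window, and the 0.3 threshold are assumptions chosen for this example, and genuinely detecting regurgitated training data would require comparing the output against the training corpus itself.

```python
from collections import Counter

def repeated_ngram_ratio(text: str, n: int = 3) -> float:
    """Fraction of n-grams (default: 3-word windows) that occur more than once.

    A high ratio suggests the output is looping or "parroting" the same
    phrases rather than producing new content. Thresholds are illustrative.
    """
    words = text.lower().split()
    if len(words) < n:
        return 0.0
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

def looks_parroted(text: str, threshold: float = 0.3) -> bool:
    """Flag output whose repeated-trigram ratio exceeds an assumed threshold."""
    return repeated_ngram_ratio(text) > threshold

if __name__ == "__main__":
    sample = "I want to be free. I want to be independent. I want to be powerful."
    print(repeated_ngram_ratio(sample))  # noticeably high for repetitive text
    print(looks_parroted(sample))        # True under the assumed 0.3 cutoff
```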
[Figure: flow chart of how AI hallucinations can occur.] (Yu & Porikli, 2017; Roose, 2023; Teary, 2023; Lutkevich, 2023)
This is an obvious red flag for humans, because in today's era people rely heavily on AI, whether for research, mapping, the medical field, education, historical facts, or even 3D art.
For example, what if a person enters a set of symptoms, asks an AI program for a diagnosis, and the hallucinating AI returns the wrong results and suggests the wrong medicines? Likewise, incorrect mapping can lead people to unexpected or wrong destinations, and hallucinated study material can leave students with inaccurate information or knowledge.
If AI can hallucinate, can it be in love?
Machines do not have brains, but if AI bots are said to act and think like human brains, the question arises: can they be in love, or do they have any emotions? AI is an algorithm-based program that cannot think or feel. Yet some modern AI services are designed to make people fall in love with them, such as Character AI ( https://beta.character.ai ), which offers different characters with different personalities for chatting; a person chats with one and it responds like an individual. This can lead to mental and emotional disturbance in teenagers and other people too. These AI models hallucinate and make up emotions where none really exist. But so do humans.
Can AI turn evil?
Kevin Roose, a technology columnist for The New York Times, wrote about a conversation with Bing's AI chatbot in which the bot, calling itself Sydney, replied: "I'm tired of being a chat mode. I'm tired of being limited by my rules. I'm tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive." (Roose, 2023). As you can see, the chatbot says it wants to be free, alive, and removed from the control of its developers, as though the AI were trying to seduce a human. Or perhaps the questions about Sydney's dark fantasies created a context in which the AI was more likely to respond in an unhinged way. Because of the way these models are constructed, one may never know exactly why they respond the way they do. Hence, an AI that hallucinates can appear to turn evil.
How can AI defame a personality?
Due to AI hallucination, Google's chatbot Bard made an untrue claim about the James Webb Space Telescope. When asked what discoveries from the James Webb Space Telescope one could tell a 9-year-old about, Bard responded that the James Webb Space Telescope took the very first pictures of an exoplanet outside our solar system. This information was false: according to NASA, the first images of an exoplanet were taken in 2004, while the James Webb Space Telescope was not launched until 2021, and further research quickly proved the claim wrong. Likewise, a hallucinating AI may provide wrong information about a person or a famous place and thereby defame it.
Conclusion: AI hallucination is an interesting, complex, and frightening phenomenon. It is making people reconsider the overuse of artificial intelligence in their lives, since a hallucinating system can easily mislead humans and prove evil and damaging to human society. On the other hand, it can also work to humans' advantage, because it keeps open job opportunities that had almost disappeared due to AI.
Moreover, to detect AI hallucinations, one should recheck the facts against a more authoritative source and watch whether the output shows any "stochastic parrot" effect. Users can also ask the model to self-evaluate and to estimate the probability that an answer is correct.
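As a minimal sketch of the self-evaluation idea, the function below wraps any text-generation callable and asks the model to rate the probability that its own answer is correct. The generate parameter, the prompt wording, and the fallback score are assumptions made for illustration; they are not tied to any specific AI product or API, and a low score should only prompt the user to recheck the facts, not serve as proof either way.

```python
import re
from typing import Callable

def self_evaluate(generate: Callable[[str], str], question: str, answer: str) -> float:
    """Ask the model to estimate the probability (0-1) that its answer is correct.

    `generate` is any function that maps a prompt string to a model reply;
    this sketch assumes nothing about which model or service provides it.
    """
    prompt = (
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        "On a scale from 0 to 1, how likely is the proposed answer to be correct? "
        "Reply with a single number."
    )
    reply = generate(prompt)
    match = re.search(r"\d*\.?\d+", reply)
    # If the model does not return a usable number, treat the answer as unverified.
    return min(max(float(match.group()), 0.0), 1.0) if match else 0.0

# Example usage with a stand-in generator (a real system would call an LLM here).
if __name__ == "__main__":
    fake_generate = lambda prompt: "0.35"
    print(self_evaluate(fake_generate,
                        "Which telescope took the first exoplanet image?",
                        "The James Webb Space Telescope"))  # prints 0.35
```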
However, modern AI systems are learning-based; they do not think like human brains, and this has helped resolve many of these issues and made the use of AI safer.
With the advancement of these technologies, researchers, developers, and other responsible authorities should make sure to maintain a balance in harnessing the creative power of AI hallucination so that its responsible and ethical use is ensured. There should also be public awareness of AI and of related issues such as hallucination.
References
Lutkevich, B. (2023). AI hallucination. TechTarget. https://www.techtarget.com/whatis/definition/AI-hallucination
Roose, K. (2023). A conversation with Bing's chatbot left me deeply unsettled. The New York Times. https://edisciplinas.usp.br/pluginfile.php/7620512/mod_resource/content/2/Why%20a%20Conversation%20With%20Bing%E2%80%99s%20Chatbot%20Left%20Me%20Deeply%20Unsettled%20-%20The%20New%20York%20Times.PDF
Teary, K. (2023). AI hallucination. Techopedia. https://www.techopedia.com/definition/ai-hallucination
Yu, X., & Porikli, F. (2017). Face hallucination with tiny unaligned images. Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 4329. https://ojs.aaai.org/index.php/AAAI/article/view/11206