"...But this intern, Teva D. Brender MD, fantasizes about having an AI chatbot write some of her reports, discharge instructions, and other paperwork that lengthens her workday. In her piece she makes the following statement about AI chatbots:
“Finally, these programs are not sentient, they simply use massive amounts of text to predict one word after another, and their outputs may mix truth with patently false statements called hallucinations.” She cites, in support of that statement, a February 2023 NY Times article by Kevin Roose titled “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled.”
In that article, Roose said: “[advanced A.I. chatbots] are prone to what A.I. researchers call ‘hallucination,’ making up facts that have no tether to reality.” Roose is not the only one to notice and worry about this: Google search “AI ChatBot hallucinations“...