“I’m particularly worried that these models could be used for large-scale disinformation.”
That’s Sam Altman, chief executive of OpenAI, the company behind the increasingly popular — and sometimes controversial — ChatGPT application: the chatbot that lets users engage in seemingly meaningful dialogue and can be used to create everything from poems to school essays.
In an interview with ABC News, Altman spoke of the promise of ChatGPT, saying it could be “the greatest technology humanity has yet developed.” But he sounded alarms as well.
In particular, he pointed to the aforementioned threat of spreading disinformation and warned that ChatGPT “could be used for offensive cyberattacks.”
Another issue: ChatGPT can simply get it wrong when people ask it a question — what Altman described as the “hallucinations problem.”
“The model will confidently state things as if they were facts that are entirely made up,” he said.
At the same time, as newer iterations of ChatGPT are developed, the application appears to be getting “smarter” — for lack of a better word. According to ABC, ChatGPT “scored in the 90th percentile on the Uniform Bar Exam [for aspiring attorneys]. It also scored a near-perfect score on the SAT Math test, and it can now proficiently write computer code in most programming languages.”
Altman, however, doesn’t see any scenarios, a la sci-fi movies like “2001: A Space Odyssey,” in which artificial intelligence takes control over humans and wreaks havoc on the planet.
ChatGPT “waits for someone to give it an input,” Altman explained. “This is a tool that is very much in human control.”