There are plenty of opinions flying around about the future of ChatGPT, but Arvind Narayanan, a computer science professor at Princeton University, thinks the hype is still overblown at this point.
In an interview with The Markup last week, Narayanan expanded on his earlier claim, made in his newsletter on AI snake oil, that ChatGPT is a “bullshit generator.” Narayanan argues that ChatGPT isn’t trained to produce true text; it’s trained to produce plausible text, so accuracy is merely a side effect of its goal of being persuasive. This means ChatGPT’s output may usually sound correct but will eventually fail spectacularly and produce blatant misinformation, which Narayanan says makes it a poor fit for fields like education or journalism.
“CNET has been publishing articles written by AI without proper disclosure, as many as 75 articles, and some turned out to have errors that a human writer would most likely not have made,” Narayanan said to The Markup. “This was not a case of malice, but this is the kind of danger that we should be more worried about where people are turning to it because of the practical constraints they face. When you combine that with the fact that the tool doesn’t have a good notion of truth, it’s a recipe for disaster.”
Narayanan likens ChatGPT’s skills to those of DALL-E. He says AI used to be able only to make the distinction between two images, but with new technology, AI can reverse that process and create an image of a cat or a dog. ChatGPT behaves much the same way, but with text: instead of finding the differences between two passages, it can produce entirely new ones. While this may indicate that the technology behind ChatGPT is still in its infancy, Narayanan is aware that people are concerned about how AI could revolutionize the workforce, or worse, cost them their jobs.
“Some jobs have gotten more efficient. Some jobs have been automated, so people have retrained themselves, or shifted careers. There are some harmful effects of these technologies, but we’re learning to regulate them,” he said. “Even with something as profound as the internet or search engines or smartphones, it’s turned out to be an adaptation, where we maximize the benefits and try to minimize the risks, rather than some kind of revolution. I don’t think large language models are even on that scale.”
ChatGPT has seen a growing wave of popularity since its public release just a few months ago, a level of popularity that has even shocked its creators. In that time, people have begun to experiment with the chatbot’s different skills. When ChatGPT took a Wharton MBA-level final exam, it passed, but demonstrated clear trouble with sixth-grade arithmetic. While the AI revolution may be upon us, a complete takeover of the human workforce isn’t imminent.