Maximiliana Wynne, Modern War Institute, Feb. 3, 2023
If you spend any time on social media, listen to podcasts, or generally pay attention to the news, chances are you’ve heard of ChatGPT. The chatbot, launched by OpenAI in November, can write code, draft business proposals, pass exams, and generate guides on making Molotov cocktails. It has rapidly become one of those rare technologies that attract so much attention and shape so many conversations that they seem to define a particular moment. It may also quickly become a threat to national security, raising a host of concerns over its potential to spread disinformation at an unprecedented rate.
ChatGPT, or Chat Generative Pre-trained Transformer, is an iteration of a language model that scans enormous volumes of web content to generate responses that emulate human language patterns. In just five days following the prototype launch, over one million users had signed up to explore and experiment with the chatbot. Although the release of ChatGPT was intended as a “research preview,” it still poses a potential threat to the many users who turn to it for answers on topics they do not fully grasp. A Princeton University computer science professor who tested the chatbot on basic information determined that “you can’t tell when it’s wrong unless you already know the answer.” This is why AI experts are concerned by the prospect of users employing the chatbot in lieu of conducting their own research. Their concerns are exacerbated by the fact that ChatGPT does not provide sources for any of its responses.
Before ChatGPT, inquisitive internet users would type queries into search engines like Google and browse the results, either finding a satisfactory answer outright or synthesizing one from multiple sources. Now, with ChatGPT, internet users can get instantaneous responses to natural language queries and requests, but those responses are unsourced, ultimately eliminating the possibility that alternative viewpoints will influence their perceptions.