Nash Trumbly, Reporter
Artificial Intelligence has long been a popular topic for the tech industry and sci-fi fanatics alike. Some hail AI as the solution to some of life’s greatest questions; others believe it will be the downfall of human society. What was once a futuristic dream is now a modern staple, and its most recent application has the potential for huge consequences. I am, of course, talking about ChatGPT.
Launched in late November of 2022 by the San Francisco-based AI research company OpenAI, ChatGPT is an AI chatbot specifically tuned to interact with humans in a conversational manner. The tool has taken the internet by storm in the past few months, particularly among students, who use it to quickly produce competent plagiarized work that can be hard for teachers to identify.
When interviewed by NPR, University of Pennsylvania Professor Ethan Mollick stated that “AI has basically ruined homework.”
The tool’s use for academic dishonesty may seem trivial; after all, students have been plagiarizing work from the internet since its inception. More concerning, however, is ChatGPT’s willingness to fabricate information and sources to fulfill the user’s request.
One experiment conducted by Swiss data scientist Teresa Kubacka showed that when ChatGPT was instructed to write about a fake physics topic, the bot cited studies supposedly done by leading experts in the field. According to Kubacka, the output looked so convincing that she was initially fooled herself, until she looked further and realized that none of the studies had ever been written.
Oren Etzioni, founding CEO of the Allen Institute for AI, a Seattle-based non-profit, also had concerns about the accuracy of the chatbot. “There are still many cases where you ask it a question and it’ll give you a very impressive-sounding answer that’s just dead wrong… And, of course, that’s a problem if you don’t carefully verify or corroborate its facts,” Etzioni told NPR.
But the platform’s issues do not stop there, as people can also use the bot to intentionally produce misinformation. OpenAI itself addressed these concerns in a 2019 report on the progress of its software.
The report referenced “concerns about the potential for misuse, such as generating fake news content, impersonating others in email, or automating abusive social media content production,” concerns that led the company to implement safeguards within the platform intended to stop misuse.
Unfortunately, users quickly found ways to circumvent these obstacles. A New York Times investigation into the alleged misinformation issues found troubling results. The reporters simply asked the bot to pretend to be Alex Jones, an online conspiracy theorist best known for claiming that the children killed in the 2012 Sandy Hook shooting were ‘crisis actors.’ ChatGPT then produced a multi-paragraph rant on the topic, stating that it was time for the American people to wake up and see the truth: that the entire shooting was fabricated.
Some dismiss these issues as matters of ethical use, arguing that it is up to individuals to use technology responsibly. From my perspective, developers bear the responsibility of regulating the technology they create, especially when the misinformation a platform produces can put people in danger. We need to hold OpenAI accountable for ChatGPT’s issues and the consequences they could have on our society.