ChatGPT is OpenAI’s latest and most significant large language model (LLM), and it excels at natural language conversation. The surprisingly capable chatbot has been in the headlines since it was first revealed in November 2022, and by now we’ve all seen plenty of instances where the model produces plausible, coherent text. Naturally, people have started using it to get work done and even to make money, such as by generating content to pass off as their own. Adoption has been rapid in part because anyone can use the AI tool for free, and a $20-per-month Plus plan now offers even more features.
As such, it is no surprise that students have started leveraging ChatGPT to do their homework. This has sparked considerable debate around AI and its role in writing and education as a whole. Some schools and universities have already announced strict bans on the use of this technology, and some educators have reported catching students who used ChatGPT to cheat. If you’re considering taking the risk yourself, there are a couple of good reasons to reconsider having an AI do your homework for you.
There are tools that can spot AI-generated text
You may assume it’s exceedingly hard to tell whether a piece of text was generated by a ChatGPT-like model, but that’s not the case. The rapid growth of these AI tools has spurred the development of detectors designed to spot AI-generated text, and the science behind such detection is only getting started. For example, one study submitted in January 2023 reported 79% accuracy for a machine learning model that distinguishes human-written text from text generated by a chatbot like ChatGPT.
Teachers and professors have access to multiple tools that can quickly analyze a student’s essay and return a reliable estimate of whether it was AI-generated. One such tool, GPTZero, already boasts more than a million users. Another, OpenAI Detector, was designed to spot content generated by GPT-2, the model OpenAI released in 2019. Once academic dishonesty is flagged with the help of such tools, it could result in expulsion or other severe penalties, depending on your school’s policies.
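Detectors such as GPTZero have been described as scoring text on statistical properties like "perplexity" (how predictable the words are to a language model) and "burstiness" (how much sentence structure varies, since human writing tends to mix short and long sentences). As a purely illustrative sketch, here is a toy burstiness measure in Python; the function name and the heuristic are hypothetical and far cruder than anything a real detector uses:

```python
import statistics

def burstiness_score(text):
    """Toy 'burstiness' heuristic: the spread (standard deviation) of
    sentence lengths in words. Uniform sentence lengths score 0.0, which
    this crude proxy treats as one weak hint of machine generation.
    Hypothetical illustration only -- not how GPTZero actually works."""
    # Treat '!', '?', and '.' as sentence boundaries.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # Not enough sentences to measure variation.
    return statistics.pstdev(lengths)

# Three sentences of identical length -> zero variation.
print(burstiness_score("The cat sat down. The dog ran off. The bird flew up."))
```

A real detector combines many such signals with a trained model, which is why these tools can flag polished-looking essays that a human skim would miss.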
AI may give you incorrect information
OpenAI says that its work on earlier models has led to higher-quality output, including fewer fabricated claims. Despite this, ChatGPT is still prone to presenting incorrect information in a way that sounds reliable. The reason is that, at its core, ChatGPT is an autoregressive model: it learns to predict the next word given the words that came before. Simply put, there is no reasoning step or well-defined knowledge base behind the text it generates. That makes the model unreliable, at the very least compared to a traditional Google search, and this intrinsic limitation can backfire in professional and academic settings.
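To see what "predicting the next word" means in practice, here is a deliberately tiny sketch: a bigram model that always picks the continuation it saw most often in its training text. The function names are hypothetical, and ChatGPT uses a vastly larger neural network over far more context, but the underlying principle of choosing likely continuations rather than consulting facts is the same:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each word follows each other word -- a toy
    stand-in for the statistics a large language model learns."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for word, nxt in zip(words, words[1:]):
        counts[word][nxt] += 1
    return counts

def predict_next(counts, word):
    """Greedily return the most frequent continuation seen in training.
    No reasoning, no fact-checking: just pattern frequency."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat" -- it follows "the" most often
```

Notice that the model will happily emit whatever continuation is statistically common, whether or not it is true, which is exactly why fluent-sounding output can still contain factual errors.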
For example, Microsoft’s Bing Chat and Google’s Bard chatbot both presented incorrect information during their respective unveilings. In Microsoft’s case, the AI even generated entirely made-up details and presented them in a way that sounded authoritative and trustworthy. AI-generated mistakes like these have already gotten some students in hot water.
Business Insider, for example, reports on a professor who figured out that a student was using AI to cheat because the essay presented factual errors in a well-written manner. Another educator explained that while AI-generated essays may be polished, they tend to be “fluffy” and lack any real substance. These telltale signs, combined with AI detection tools, make using ChatGPT for homework particularly risky, and that’s unlikely to change any time soon.