A Bilibili content creator (UP master) shared an undergraduate thesis written with ChatGPT in just half an hour, which passed a domestic plagiarism-checking website with a similarity rate of under 9%. Even so, it turned out that most of the references cited in the ChatGPT-written paper were fabricated and nonexistent, backed by no published research.
A survey conducted by Study.com in January of this year found that 48% of students over the age of 18 had used ChatGPT to complete quizzes, and 53% had used it to write papers.
While ChatGPT's stunning performance at tasks such as writing formatted papers and passing professional exams has delighted students, it has put universities and research institutions on alert, and an anti-ChatGPT movement is becoming a new trend in academia.
Earlier this year, the University of Hong Kong sent an internal email to all students and faculty stating that the use of ChatGPT and other AI tools is prohibited in all classes, assignments, and assessments at the university. Faculty members who suspect students of using ChatGPT or other AI tools may require them to discuss their assignments or take additional oral exams and tests, among other measures.
In a joint statement last week, the Journal of Jinan (Philosophy and Social Science Edition) and the Journal of Tianjin Normal University (Basic Education Edition) announced that they will not accept any large language model tools (such as ChatGPT) either as individual authors or as co-authors of articles.
A number of universities and academic journals in the United States and Europe issued similar notices blocking ChatGPT even earlier.
But does simply blocking ChatGPT actually stop plagiarism?
Just as the once-popular "AI face-swapping" applications spawned a wave of "anti-face-swapping" identification tools to counter them, is it feasible to tell whether a paper was generated by ChatGPT?
RealAI, a company specializing in identifying synthesized audio and video, told TechWeb that ChatGPT-generated material is hard to identify because machine-written text differs from human-written text in fewer distinct features than synthesized audio or video does, and machine-synthesized text increasingly follows the structural and semantic rules of human writing. Since human writing itself is highly variable and can be loosely structured, it is very difficult to distinguish ChatGPT-generated text from human text on content alone. RealAI is currently building its synthetic-text recognition capability, which will be available for demonstration in the near future.
"Anti-ChatGPT" watermarks are, in part, wishful thinking on the part of developers
As the "anti-ChatGPT" technology race heated up, digital watermarking became a hot topic. How much real potential does digital watermarking have as a countermeasure to ChatGPT?
Digital watermarking is a technique for protecting digital content, including text, images, audio, and video, from unauthorized copying and distribution. It embeds hidden identifiers into the content that do not affect the content itself but can help establish its true source and identify its copyright information.
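To make the idea concrete: one simple (and easily defeated) way to embed a hidden identifier in plain text is with zero-width Unicode characters. The following is a minimal illustrative sketch, not how any production watermarking system actually works, and all function names are hypothetical:

```python
# Minimal sketch: embed a hidden identifier in text using zero-width
# Unicode characters. This illustrates the *idea* of a text watermark;
# real systems (including proposed LLM watermarks) are far more robust.

ZERO = "\u200b"  # zero-width space      -> encodes bit 0
ONE = "\u200c"   # zero-width non-joiner -> encodes bit 1

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag's bits as invisible characters."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    hidden = "".join(ONE if b == "1" else ZERO for b in bits)
    return text + hidden

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, if any."""
    bits = "".join("1" if c == ONE else "0"
                   for c in text if c in (ZERO, ONE))
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
    return "".join(chars)

marked = embed_watermark("This paragraph looks ordinary.", "AI")
print(extract_watermark(marked))  # prints "AI"
```

Note that simply re-typing the text, or stripping non-printing characters, destroys this watermark entirely, which is exactly the kind of weakness the experts quoted below point out.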
A digital watermark could be used against ChatGPT in two different ways.
First, ChatGPT-generated content could be digitally watermarked to mark it as machine-written, enabling detection. But this requires the cooperation of the companies behind AI content generators such as ChatGPT.
OpenAI, the company behind ChatGPT, has previously said it is considering watermarking content generated by its AI systems so that text can be identified as machine-written, but so far it has not been seen to take such action.
Even if AI content-generation companies are willing to add digital watermarks to generated content, industry experts believe watermarking is a technical tool that cannot solve every plagiarism problem: some users may remove the watermark, or evade detection by reordering words or sentences.
AI-generated content is typically produced from large amounts of training data and model parameters. The second idea, then, is to digitally watermark original content to assert its copyright and prevent it from being used as training data by AI models such as ChatGPT, heading off imitation and plagiarism at the source.
But this idea is somewhat wishful thinking. Digital watermarking cannot guarantee that content will not be used to train models: unscrupulous actors may strip the watermark or alter the text to evade detection, or simply ignore the watermark and take copyrighted content for training data anyway.
These tools may be useful for “anti-ChatGPT”
There are several tools available worldwide for detecting whether an article was generated by AI, including:
OpenAI GPT-3 Playground: An online application developed by OpenAI for testing and exploring the capabilities of the GPT-3 language model. Given some input text, the GPT-3 model generates the next sentence or a complete article; the application can also be used to probe whether an article resembles the output of a language model such as GPT-3.
Grover: A tool developed by the Allen Institute for Artificial Intelligence to detect fake news and fabricated articles. Grover analyzes an article's language style and structure to distinguish human-written articles from AI-generated ones, and can also recognize some common forgery techniques and tricks.
AI21 Writer’s Studio: An online writing tool developed by AI21 Labs that provides automated suggestions and editing to help users write more fluent and accurate articles. It can also be used to detect whether a piece of writing was generated by AI.
Botometer: A Twitter bot detection tool developed by Indiana University and the University of Southern California. Botometer analyzes the activity and behavior of a Twitter account to determine whether it is managed by a real user or an automated bot.
All of these tools can be used to detect whether an article or paper was generated by artificial intelligence, but it is important to note that they are not 100% accurate.
Therefore, when evaluating whether an article or paper is AI-generated, it is best to combine multiple methods and techniques for a comprehensive analysis and judgment.
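None of these tools fully publish their methods, but detectors of this kind commonly combine statistical signals, such as how predictable a text is to a language model and its "burstiness" (variation in sentence length, which tends to be higher in human writing). As a purely illustrative toy, with a made-up threshold and hypothetical function names, the burstiness signal can be sketched like this:

```python
# Toy illustration of one signal some detectors reportedly use:
# "burstiness", the variation in sentence length. Human writing often
# mixes long and short sentences; LLM output can be more uniform.
# This is a crude heuristic, NOT a reliable classifier.

import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def looks_machine_written(text: str, threshold: float = 3.0) -> bool:
    """Low burstiness *may* hint at machine-generated text."""
    return burstiness(text) < threshold

uniform = "The model writes text. The model makes claims. The model cites work."
varied = "No. That seems wrong. A real detector would also measure how predictable each word is under a language model."
print(looks_machine_written(uniform))  # prints True
print(looks_machine_written(varied))   # prints False
```

A single weak signal like this is easily fooled in both directions, which is precisely why the article's advice to combine multiple methods and human judgment applies.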
Taking ChatGPT as an example, Tang Jiayu told TechWeb: “To identify ‘ChatGPT-generated’ text: first, make good use of plagiarism-check mechanisms and technology to prevent ChatGPT from producing academic articles through rewriting and summarizing; second, look at the content itself. ChatGPT currently cannot fully guarantee the logical rigor and thematic consistency of academic articles, which can be discerned at the semantic level.”
In addition, as the case of the ChatGPT-written undergraduate paper shared by the Bilibili creator shows, it also helps to check an article's source and background: a paper from an unknown or untrustworthy source warrants more careful evaluation.
Regarding the plagiarism that ChatGPT may enable, Pan Xin, former COO of New Oriental Online, believes that “this concern is basically unnecessary.” “Before ChatGPT existed, were there no plagiarized papers or assignments? A problem created by technology can certainly be solved by technology plus administrative means.”
Students, for their part, note that papers written with ChatGPT contain made-up data, arguments, and even references, and suggest that ChatGPT be limited to auxiliary work, such as outlining a paper or advising on its general direction.
Other students believe self-discipline is needed: “Anti-ChatGPT software is coming online one after another, and the algorithms will only get better. Even if someone muddles through now, it’s only a matter of time before they are found out.”