AI Tools Pose Threat To Research Ethics

Among the various obstacles to the integrity of academic research, perhaps the most dangerous emerging challenge is the rise of artificial intelligence (AI) tools such as ChatGPT. The widespread use of these tools seriously threatens scientific integrity and research ethics. More importantly, it risks disrupting the broader academic ecosystem that sustains research, teaching, scholarship, and general equity.

This article attempts to analyze this potential threat to the sanctity, authenticity, and integrity of research, especially in the social sciences. Before exploring the implications of ChatGPT's use and abuse, it is necessary to briefly discuss how it works. Where does it get its information? Why and how has it become a center of attention, and of controversy, around the world?

How has it driven dramatic changes in research, both positive and negative? Are governments around the world prepared to take punitive measures against the misuse of these AI tools in research? Beyond academia, which sectors are threatened by new conversational AI platforms? What if ChatGPT promotes fraud such as research plagiarism? The origins of ChatGPT date back to 2018, when its parent organization, the California-based OpenAI, released the first version of its language model, GPT-1. This was followed by GPT-2 in 2019 and GPT-3 in 2020; ChatGPT itself was launched on November 30, 2022, with the more powerful GPT-4 following in 2023.

It is designed to generate content such as text, images, videos, simulations, and code, and it can write essays, emails, research papers, fiction, math worksheets, and more in one go. For such tasks, it requires only a prompt, in written or spoken form, and it responds accurately and efficiently within seconds. The latest version, GPT-4, is the most advanced in terms of speed, coverage, and efficiency: it can process about 25,000 words of text and analyze images with near-human precision. ChatGPT registered one hundred million users within two months of its launch, unprecedented growth for any application. This leap in artificial intelligence could lead to profound changes in fields as varied as education, cinema, animation, markets, and technology.

ChatGPT has revolutionized the use of artificial intelligence, and its direct impact on academic integrity is already raising serious concerns across the wider academy. In response to a few keywords or phrases, it produces remarkably comprehensive content, with significant consequences for existing knowledge systems. It already powers Microsoft's Bing search engine; financial firms such as Morgan Stanley Wealth Management are using it to build internal information systems; and several online education companies employ GPT-4 as an automated tutor. OpenAI is not alone in this change: tech giants like Google, Meta, and Microsoft are investing billions of dollars to build their own chatbots and AI technologies. In a recent test conducted by professors at the University of Minnesota, ChatGPT earned a grade of "C" on law school exams, no small feat for a bot, and it has also passed a law exam and an MBA exam in the United States. Meanwhile, various widely recognized plagiarism-checking tools have been developed to safeguard academic integrity in the world's research systems.

But this revolutionary change, which experts have called "tectonic", poses a serious threat because it can defeat every plagiarism-checking system in operation. The concern stems from how AI tools like ChatGPT work: they can produce multiple writing styles, paraphrase text, and, most importantly, let the user control the style of the output in various ways. In other words, these AI tools are sophisticated enough to bypass the tools used to combat plagiarism in research. It is important to note that ChatGPT is neither the first AI writing tool nor the first developed by OpenAI.

Many AI writing tools existed before ChatGPT: CopyAI, Writesonic, Kafkai, Copysmith, Peppertype, Articoolo, Copymatic, and others. The question, then, is how ChatGPT differs from them, and why it has fueled growing controversy in wider academic circles. Why worry about ChatGPT when web browsers, search engines like Google, and repositories like Wikipedia have been around since the 1990s? Unlike earlier tools, ChatGPT is robust in operation, efficiency, and content accuracy. It is a large language model (LLM) that can instantly generate human-like text, respond to prompts with striking accuracy and apparent intelligence, and sustain a coherent conversation from a scholar's perspective.

The most dangerous consequence is that ChatGPT can provide answers in different textual styles: its underlying model is powerful enough to infer a user's intent and generate text in a simple or complex register to match. It goes without saying that such stylistic variation can bypass plagiarism checks in academia. This will breed an atmosphere of mistrust in which even the honest work of a sincere researcher may be questioned, an outcome no one wants in a globalized, post-ICT world. Do governments have adequate mechanisms to address these threats to academic integrity?

In response to an unprecedented rise in cheating, several universities in countries such as France, the US, and India have already banned the use of such AI tools, but with little real effect. Many researchers have even listed ChatGPT as a co-author on their research papers. So far, about 12,000 journals, including the well-known Nature journals, have officially barred ChatGPT from being credited as an author. At the same time, recognizing that they cannot simply outlaw technological progress, these journals have updated their guidelines: AI tools may be used to improve the readability and language of a research article, but not to replace the primary tasks that authors must perform, such as interpreting data or drawing scientific conclusions. Artificial intelligence in research is a revolution, and that is undeniable. In many ways, the scientific community has to accept such a technological leap. But the question remains: what can be done to preserve the integrity of scientific research?

First, a reliable, proven tool to detect the fraudulent use of AI has yet to be invented. Last year, OpenAI launched a tool called AI Text Classifier to help distinguish AI-generated text from human-written text, but it too is considered flawed, and OpenAI itself does not claim that it is accurate in all cases. Hence the importance of sound and enforceable policies to protect the integrity of scientific research. A serious multi-stakeholder effort to address this unprecedented challenge and ensure academic integrity is the need of the hour.

(The author is Associate Professor, Department of Political Science, Galsi Mahavidyalaya, Purba Bardhaman.)