Resisting “technopanic”: There are better ways for universities to respond to ChatGPT

ChatGPT has generated considerable anxiety and controversy since its release by OpenAI in November 2022, much of it concerning seemingly dire consequences for our classrooms. Above all, there is concern that ChatGPT will enable, perhaps even encourage, plagiarism, and that this plagiarism will go undetected and lead to a (further) erosion of educational standards.

But what lies behind these fears? I want to be careful not to incite (yet another) technopanic. Instead, I argue that universities should respond to AI upstarts constructively, rather than with fearmongering and defensiveness.

ChatGPT and its discontents

Chat Generative Pre-trained Transformer (ChatGPT) is a chatbot that "provides smooth and thoughtful responses to questions and queries." The technology can do a number of things. It can answer questions about quantum physics. It can write poems. Oh, and not only can it produce "a credible essay on almost any topic", it can do so far faster than a human can write one.

Like any technological innovation, ChatGPT is not perfect. It appears to have limited knowledge of events after 2021, so there would be little point asking it about, say, US Representative George Santos. Nevertheless, its versatility and sophistication, especially compared with other AI tools, have made it the subject of intense public scrutiny and even alarm.

For example, The Guardian last month quoted one vice-chancellor as being concerned about "the emergence of more sophisticated text generators, most recently ChatGPT, which appear to be able to produce highly convincing content and make detection difficult". The story reported that some Australian universities are reverting to traditional pen-and-paper exams in response to the advent of ChatGPT and similar AI developments.

For his part, West described ChatGPT as "the latest mass disruption to education" and a "new plagiarism threat" that "university leaders" are rushing to "combat".

Such fears are not unfounded. Plagiarism, ghostwriting and contract cheating are serious problems in universities. As a 2022 study explains:

Assessment integrity is essential to academic integrity, and academic integrity is essential to the entire enterprise of higher education. When students gain credit for work completed by others, the assurance of their skills and the value of their qualifications are undermined.


Threats to academic integrity are not uncommon. The study, which surveyed 4,098 students from six Australian universities and six higher education providers, found very high rates of all of these practices.

Nor has this threat to the integrity of higher education gone unnoticed by higher education officials or the general public. There were media reports of contract cheating, and of AI being used to generate essays, even before ChatGPT arrived. Most teachers will be familiar with instances of students reproducing part of a research article or a Wikipedia entry and presenting it as their own work. This can happen even after repeated in-class reminders about the importance of properly referencing and acknowledging the work of others.

Add to this the complex and often precarious conditions in which university teachers work. A 2021 article in The Conversation reported that around 80 per cent of teaching in Australian universities is carried out by temporary staff, such as 'sessional' staff and staff on short-term contracts, with little or no guarantee of more stable or longer-term employment. All academics (regardless of employment status) work in time-pressured environments where teaching is only one of a growing list of duties.

AI did not create the sector's problems, but nor has it eased the load on university teachers. Responding to a breach of academic integrity can be time-consuming and emotionally draining for students and staff alike. Some breaches, such as those that ChatGPT is said to enable, can slip past the software designed to detect them and can be very difficult for a teacher to prove.


Beyond technopanic

My concern is that an exclusive or dominant focus on the threats ChatGPT poses to academic integrity can ultimately amount to technopanic. A technopanic is the belief that public morality and safety are under threat from a technological development, be it smartphones, social media or artificial intelligence.

Technopanics serve several purposes. They provide convenient scapegoats for real and perceived social ills. These scapegoats are easy to identify, and they are non-human and thus unable to answer back (ChatGPT may be an exception here). Technopanics are also perfectly suited to the clickbait era, though this kind of panic predates Web 2.0: the "video nasty" scare of the 1980s is a case in point.

Ultimately, technopanics are defeatist. They are inherently uninterested in developing constructive approaches to technology, tending instead towards punitive and often unrealistic measures (such as deleting one's social media accounts or banning AI from classrooms). Technological innovation is framed as a determining, and therefore inevitably negative, influence on human activity.

AI is, after all, a human creation. Its uses and misuses reflect and perpetuate human mores, values, belief systems and prejudices. A recent study argues that addressing the ethical issues surrounding artificial intelligence "requires training at the earliest stages of our engagement with AI, whether we are developers learning about AI for the first time or users just beginning to interact with it".

A constructive way forward

With that in mind, let me share some ways in which universities can respond constructively to the rise of ChatGPT. Some of these have been proposed elsewhere. Arguably, all of them could also be adopted by institutions beyond the ivory tower, such as primary and secondary schools.

  • Arrange briefings by AI experts (academic researchers, media professionals) about ChatGPT and similar AI tools. These sessions could be offered to students and staff separately. They should present a non-sensationalist, fact-based overview of how the technology works, along with its potential harms and benefits. Addressing the benefits is important, because AI is not wholly problematic, and it would be naïve, if not paranoid, to claim otherwise. The sessions should also allow students and staff to voice their concerns without judgement and, ideally, to learn something new. Members of both groups will arrive with very different understandings of ChatGPT, from those who have used the technology to those who have only encountered alarming headlines.
  • Develop clear, unambiguous institutional guidelines on students' use of AI in assessment.
  • Incorporate AI into the classroom to enhance learning, prepare students for the workplace, and teach the ethical use of AI. Tama Leaver made this point in relation to the Western Australian Department of Education's decision to ban ChatGPT in public schools. Leaver refers here specifically to young people, though his words apply to students of all ages:

Education must provide our children with the vital skills to ethically use, evaluate, and build on the uses and outputs of generative AI. Don't leave them to try it at home behind closed doors because our education system is paranoid that every student wants to use it to cheat in some way.

  • Make ethics training compulsory in all degrees, especially in the first year. This training could take the form of a semester- or term-length course, or be integrated as a module within an existing course (e.g. Introduction to Computer Science, Introduction to Media and Communications). Choosing whether to breach academic integrity by purchasing an essay or having a chatbot write one is an ethical decision; it hinges on what one believes to be right and wrong. The same goes for decisions about whether to use a technology in beneficial or harmful ways.

Each of these proposals has implications for universities' limited budgets, and for the limited time of students and academics. Even the best-intentioned and most generous AI researchers will not want to be asked constantly to front up and explain chatbots to their colleagues when there are other pressing demands on their time and attention.

Still, these proposals are better than throwing up our hands and conceding defeat to our technological overlords.

Jay Daniel Thompson is Associate Professor of Professional Communication in the School of Media and Communication at RMIT University. His research examines ways of cultivating ethical online communication in an era of digital disinformation and online hostility. He is co-author of Content Production for Digital Media: An Introduction and Fake News in Digital Cultures.
