Machine Learning

AI writing app ChatGPT challenges not only our education system but also our way of thinking

The University of Notre Dame's Fremantle campus is in the city's West End, which means it gets eerily quiet during the long summer break between semesters. Located at the southern end of the harbor (still in use, despite various relocation proposals) and dominated by Georgian and Victorian architecture, the West End looks like an old photograph once the students have gone home: drained of excitement, moving at a slower pace. Yes, the cafes, shops and cocktail bars stay open across the break, attracting tourists who stroll past the Round House, a former prison built to Jeremy Bentham's infamous panopticon design and the Swan River Colony's first permanent building. But otherwise the area feels eerily quiet: not quite a ghost town, but sealed off in a way, with too little human presence to bring it to life.

At the university, however, there is a peculiar buzz as academic and professional staff return to prepare for the new semester. Like an army receiving intelligence of a new and terrifyingly effective weapon, professors are preoccupied with the next big thing in artificial intelligence: a development with the potential to disrupt modern forms of academic practice so thoroughly as to make commercial "contract cheating" look as quaint as crib notes scribbled on the palm of a hand. As essay questions are set and lesson plans drawn up, everyone involved in the transfer of knowledge is aware that the terrain has shifted, and everyone is talking about ChatGPT.

ChatGPT, launched in November 2022, is an AI-powered chatbot that uses deep learning to generate natural-language responses to prompts. Trained on a vast corpus of material, it can instantly synthesize original content in the form of answers to specific questions, essays on set topics, literary parodies, scripts and more, convincingly enough to pass the Turing test. Very little in its responses reveals it as a non-human actor, as many journalists have demonstrated by embedding snippets of AI-generated content in their articles and inviting readers to spot the difference. (Given the job losses that ChatGPT and its equivalents may cause, this is perhaps a foolhardy strategy.) Certainly, in my various trials, nothing the program wrote gave it away, save perhaps a stubborn refusal to confuse "then" with "than" or "alternate" with "alternative". There is no getting around the problem: ChatGPT is good, and it changes everything.
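For the technically curious, the fluency described above rests on a simple loop: the model repeatedly samples a next token from a learned probability distribution and appends it to the text so far. The sketch below is a toy illustration of that autoregressive loop only; the bigram table and its probabilities are invented for this example, standing in for a neural network trained on billions of words.

```python
import random

# Toy stand-in for a language model: for each token, a hand-written
# distribution over possible next tokens (probabilities are made up).
# A real system like ChatGPT learns such distributions, over a vocabulary
# of tens of thousands of tokens, from its training corpus.
BIGRAMS = {
    "the": {"terrain": 0.5, "university": 0.5},
    "terrain": {"has": 1.0},
    "has": {"changed": 1.0},
    "university": {"has": 1.0},
}

def next_token(prev, rng):
    """Sample the next token from the model's distribution over successors."""
    dist = BIGRAMS.get(prev)
    if not dist:
        return None  # no known continuation: stop generating
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(prompt, max_tokens=10, seed=0):
    """Extend a prompt one sampled token at a time, as a chatbot does."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(out[-1], rng)
        if tok is None:
            break
        out.append(tok)
    return " ".join(out)

print(generate("the"))
```

The point of the sketch is that nothing in the loop "understands" anything: the output is plausible because the distributions encode the statistics of the training text, which is also why scale, not comprehension, is what separates this toy from the real thing.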

Not everyone in the sector sees this as a problem, of course. Right now, opinion on ChatGPT seems divided between those who see it as a potential tool for academic research and writing, and those (like me) who see it as a challenge, perhaps an existential one, to a particular model of education. The first group is by far the smaller, though its views have been over-represented in the media of late, perhaps owing to a fetish for "balance". Essentially, these views amount to the charge that concern about ChatGPT is a form of moral panic, akin to earlier panics over print, television, the internet or Wikipedia (still lazily derided in academia as an unreliable source of information). The first group thus holds what is sometimes called the "instrumentalist" view of technology, which characterizes tools and techniques as fundamentally neutral phenomena that people use to achieve their ends, rather than as phenomena with the power to shape the cultures in which they are embedded. The instrumentalist view is very popular in Silicon Valley and stems from a reflexive belief in progress, even from a kind of fatalism that borders, to my mind, on nihilism. It also tends to assume, implicitly, that the human brain is itself a "technology" that can be rewired for greater efficiency. When proponents of new technologies claim, for example, that their use will free students up to perform other tasks, they are reproducing the very conceptual model (call it the brain-as-computer model) that led AI researchers to pursue the dream of machine language processing in the first place. They are also grossly distorting the process by which human intelligence comes into being.

This relates, in my view, to the biggest problem with ChatGPT, which has less to do with accuracy, bias or toxic language than with the prospect of a further defeat for our agency, and therefore for our ability to think freely and flourish. As the writer and artist James Bridle argues, the information revolution has ushered in a "new dark age" in which the price of ever smarter devices is an ever shallower understanding on our part. As ways of working disappear into algorithmic machines, so does our practical understanding of the world, of how things fit together, with the result that the world becomes a "black box", opaque to its inhabitants. Despite the transhumanist dreams of the tech set and Elon Musk's warnings about the existential risks of "strong" AI, few of us are foolish enough to believe that ChatGPT really thinks. The problem is that not everyone who uses it will really think either. And since thought remains the raison d'être of the university (albeit semi-officially), this is an issue that goes beyond any narrow concerns about accuracy or plagiarism. It is a challenge to the university itself, or at least to its liberal conception, and to the society it is meant to serve.

It follows that the advent of ChatGPT is an opportunity to think more broadly about the role of technology in formal education. What is education for? If your goal is to produce employees who can use AI, then training students on ChatGPT makes sense. But if your goal is to produce thinking people whose capacity for thought is bound up with their flourishing, it may be better not to take the risk. Nor should we stop at ChatGPT. We might extend our reassessment to other technologies, some of which crept into academia during the COVID lockdowns. (I know of no academic who regards online learning as being as effective as face-to-face teaching, or who thinks such methods are good for students' mental health.) We might even reconsider writing itself, or at least the role it plays in modern educational practice. A frequent move among the Pollyannas is to cite the example of Socrates, who considered writing harmful to memory and a distortion of intellectual inquiry, since a text cannot explain or qualify its arguments as a person can. What, they ask, could be more regressive than opposing the alphabet? How it reveals the technophobic spirit! But you don't have to go all the way with the old Athenian gadfly to concede that he was right about some things (writing does atrophy memory) and that he asked the relevant question: what does a given technology give us, and what does it take away? We Cassandras can be a reactionary bunch, and of course one always has to ask whether one has confused the state of the world with the state of one's lower back. Still, the Pollyanna approach strikes me as naive. At the very least, it should be remembered that Socrates was not only the originator of what might be called the techno-critical tradition but also the forerunner of the dialectical method of reasoning on which the liberal university is founded.

I am not suggesting, of course, that the modern university should model itself on the Athenian agora. But we are moving ever deeper into the age of techno-scientific capitalism, an age in which it may become possible to reproduce not only human language but human beings themselves, and not by the traditional method; that is the prospect today's students will have to face. We must recover our ability to evaluate new technologies in the spirit of what Lewis Mumford called "democratic technics", and put human freedom and flourishing at the center of our thinking. The fact that this is precisely the capacity ChatGPT will erode, if used in bad faith, makes the task urgent. We must start thinking about thinking machines, on pain of a future society in which we are strangers to each other: ghosts wandering a city of silicon, deprived of the flesh-and-blood others without whom we can never be fully human.