Media around the world have recently been inundated with talk of ChatGPT, an interactive chatbot powered by machine learning, developed by the San Francisco-based software company OpenAI.
The chatbot’s near-impeccable performance has filled the public with ambivalence. While some hail it as a new milestone in technological development, others dread it as a threat to human interests.
Bill Gates compared the significance of ChatGPT to that of the internet. Elon Musk, one of the co-founders of OpenAI, praised it as “scary good,” adding, “We are not far from dangerously strong AI.”
Among its critics, American linguist Noam Chomsky has dismissed its educational value outright: “I don’t think it has anything to do with education except undermining it. ChatGPT is basically hi-tech plagiarism.”
Unlike the enthusiasts who marvel at the apparent omnipotence of the human-like software, I have doubts about it because of the potential problems and risks it may pose to human society.
I am a staunch proponent of scientific and technological advancement. I have been thrilled by new inventions and discoveries, including AI, and have been a fervent fan of new products that utilize cutting-edge technology. AI’s dazzling development, however, has reached a tipping point that may have such a profound impact on humans that deliberate precautions are called for.
Take ChatGPT for example. It seems capable of anything, from writing homework essays and churning out computer code to preparing legal documents and presenting proposals for Twitter’s future development. Its capabilities throw humans into a dilemma: Is it helping us or taking away our jobs? In the U.S., some businesses have already acted to replace some of their employees with ChatGPT.
Governments, businesses and individuals must join efforts to create new types of jobs for the displaced employees.
With the rapid progress of AI comes the necessity of addressing the legal and ethical issues its use raises. Legislation is required to regulate the use of AI, and some boundaries must be drawn, in a way similar to the ban on human embryo cloning.
AI’s application in education is trickier.
Educational authorities are facing the decision of whether, or to what extent, they should allow students to use AI in their studies. ChatGPT can produce a decent essay on gun violence and control, and because of that, public schools in Seattle and New York City have banned the tool over cheating concerns and its power to disrupt genuine learning. We also need to be aware of the potential risk of AI, with its capacity to impersonate a particular person, being used in fraud and blackmail.
The good news is that OpenAI announced, the week before last, a new feature that may help teachers spot the use of ChatGPT in essays and other assignments, an example of how humans can always find a way to innovate and improve their inventions.
But we need to act quickly before things get out of control.
(The author is an English tutor and freelance writer.)