Treatise: With ChatGPT, woe and wonder unleashed

The digital age continues to fascinate the world through artificial intelligence. But with ChatGPT, sentiments about the potential of this new technology are decidedly mixed.

It is easy to understand the appeal of ChatGPT. With nothing more than a prompt and a smattering of imagination, one can get the chatbot to generate a poem, an essay, or perhaps some Python code. Indeed, OpenAI’s large language model has taken the internet by storm since its November release, with its user base exploding past 100 million.

In this short time span, ChatGPT has evolved from a niche curiosity among tech enthusiasts to the focal point of discussions about the role of generative artificial intelligence (AI). Likewise, much has been said and written about ChatGPT: its abilities, its dangers, and its future prospects. A zealous few have hailed it as the beacon of an AI-powered utopia, while others respond with skepticism or outright rejection.

Amid this sea of opinions, however, one thing remains clear: ChatGPT will remain disruptive. A Pandora’s box has been opened, so to speak—and how we adapt to the zeitgeist that comes with it is of great importance.

Under the hood

ChatGPT is a program that can generate many types of text in almost any style, from programming code to witty poems to casual conversation. At its heart, ChatGPT is built upon OpenAI’s GPT-3 class of language models, which generate human-like text from user prompts. Unlike other GPT-3-based language models, however, ChatGPT is designed to hold conversations with its users and is far more versatile than one may assume at first chat.

The conversational format of ChatGPT allows it to respond to follow-up questions, give opinions, and even decline inappropriate requests. Moreover, ChatGPT can ask its own clarifying questions, improving the accuracy of its answers. Although it is still imperfect, the capabilities of ChatGPT make it a helpful tool that is already being applied in many fields.

For instance, ChatGPT has been used as a mental health tool. Creative technologist Michelle Huang, who works in mixed media, fed the chatbot her old journal entries so she could converse with her younger self, allowing her to reflect and discuss with her “inner child.” For Huang, the exercise helped her remember what she loved at a young age, giving her enough emotional space to move forward.

ChatGPT has also opened new avenues for workers who struggle with written communication to correspond more professionally. Business consultant Danny Richman shared how he used the technology behind ChatGPT to build a program that automatically writes formal emails to clients, prepares quote estimates, and replies to inquiries. He designed these systems specifically for young professionals with speech, literacy, or communication difficulties, opening up opportunities that might otherwise be closed to them.
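
Richman has not published his implementation, but the general pattern behind such a tool is simple to sketch. The snippet below is a minimal illustration of our own, assuming OpenAI’s legacy Python SDK; the model name, prompt wording, and function names are our assumptions, not Richman’s actual design.

import os
import openai  # legacy OpenAI SDK (pre-1.0 interface)

openai.api_key = os.environ["OPENAI_API_KEY"]

def formalize(rough_note: str) -> str:
    """Rewrite a rough note as a polite, formal business email."""
    prompt = (
        "Rewrite the following note as a polite, formal business email:\n\n"
        f"{rough_note}\n\nFormal email:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 model; illustrative choice
        prompt=prompt,
        max_tokens=200,
        temperature=0.3,           # keep the output conservative
    )
    return response.choices[0].text.strip()

print(formalize("need the invoice paid by friday pls"))

Wrapped in a simple form or spreadsheet front end, a few lines like these are enough to turn a rough note into client-ready correspondence.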

A cornucopia of concerns

Given ChatGPT’s repertoire of abilities, it is unsurprising that many see it as a valuable, if still immature, tool: one that can automate menial tasks, guide learning, and even aid in the creative process. As with other popular generative AI programs, however, many concerns have been raised about ChatGPT, particularly about how OpenAI moderates its outputs.

GPT-3, the language model that undergirds ChatGPT, is known to reflect the biases and dispositions present in its training dataset, which includes text crawled from the internet. Given the toxic nature of much of that text, GPT-3 can reproduce the hate and bigotry in its training data when given prompts tailored to elicit them.

In light of this, it was recently revealed that OpenAI outsourced the labeling of toxic content, which is used to filter ChatGPT’s outputs, to Sama, a company that styles itself as being at the forefront of ethical AI despite worker testimony to the contrary. While the poor working conditions of Sama’s employees are another discussion entirely, ethical concerns also remain about just how well such moderation can work.

Concerningly, despite the many safeguards put in place by OpenAI, ChatGPT can still be tricked via prompt engineering into generating content that violates its own guidelines. It can also confidently regurgitate plausible-sounding falsehoods and even cite non-existent research papers to back its claims. ChatGPT is, at its heart, a language model: it has been trained to parse and generate human-like language, not truth, which is all the more reason to be wary of its uncritical use.

Another point of concern about ChatGPT is its well-documented role in plagiarism by students at the high school and even university level. In fact, in writing this article, one of the authors overheard students on campus talking about using the tool to submit plagiarized work for their classes.

Francisco Guiang, a history professor at the University of the Philippines Diliman, suspected that some of his students had used the AI tool for their final papers after noticing submissions that were incoherent and rambling; AI detector sites likewise flagged the work as machine-generated. The incident sparked an online debate on the use of AI in schoolwork: some say it could enhance learning productivity, while others argue that its use is a form of academic dishonesty.

Third parties have already taken measures to curb this new form of plagiarism, such as the development of GPT text detectors, but these are easily circumvented by running the text through paraphrasing programs. A more promising solution may come in the form of text watermarking, which OpenAI is currently developing. However, only time will tell whether this intervention will prove effective and whether OpenAI will monetize the remedy to a problem with which it is inextricably linked.
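
OpenAI has not disclosed how its watermark would work, but published schemes, such as the “green list” approach described by Kirchenbauer and colleagues in early 2023, convey the general principle. The toy Python sketch below is our own illustration of that idea, not OpenAI’s method: a generator is nudged toward a pseudorandomly chosen half of the vocabulary, and a detector later tests for that statistical bias.

import hashlib
import random

VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima"]

def green_list(prev_word):
    # Deterministically mark half the vocabulary "green", seeded on the
    # previous word, so generator and detector agree without coordination.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(n_words, watermark):
    # Stand-in for a language model: picks words at random, but the
    # watermarked generator samples only from the current green list.
    words = [random.choice(VOCAB)]
    for _ in range(n_words - 1):
        pool = sorted(green_list(words[-1])) if watermark else VOCAB
        words.append(random.choice(pool))
    return words

def green_fraction(words):
    # Detector: watermarked text lands in the green list far more often
    # than the roughly 50 percent expected by chance.
    hits = sum(w in green_list(p) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)

print(green_fraction(generate(500, watermark=True)))   # prints 1.0
print(green_fraction(generate(500, watermark=False)))  # roughly 0.5

A real watermark would only bias, rather than restrict, the model’s word choices in order to preserve text quality, but the detection logic, which counts statistically improbable patterns, works the same way.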

Setting precedents

Muddying the waters even further is the hazy line between authorship and curatorship that arises when ChatGPT is treated as a tool, which is how most of its proponents view it. That is, how transformative does a work containing text from ChatGPT have to be in order to be considered original or count as intellectual property? Is transformativity even the proper metric? Where does inspiration end and copying begin? Is it even ethical to use generative AI?

These questions are not unique to ChatGPT; they confront generative AI in general, be it Stable Diffusion or DALL·E 2. While none of them are particularly novel, they are bound to become more difficult to answer as more people use ChatGPT in increasingly creative and diverse ways, and as the chatbot itself evolves past its current limitations.

Microsoft has already rolled out an updated version of ChatGPT integrated with its Bing search engine. Unlike the version of the chatbot released in November, this one can search the live web: it is no longer bound to training data from before 2022. While this makes ChatGPT far more useful, it also amplifies the dangers inherent to the technology.

Of course, the easy answer to all of these concerns is regulation; the European Union and other jurisdictions are already considering this course of action for generative AI such as ChatGPT. While this is undoubtedly a step in the right direction, pushing paper is not guaranteed to effect meaningful change, especially when the target is a burgeoning technology that sits behind a USD 20 paywall, is projected to become a billion-dollar revenue stream, and is actively being integrated into a search engine with over a billion users.

This is not to say that regulating ChatGPT and other generative AI is unnecessary; quite the contrary. Regulations, when applied and enforced well, can be a force for good, which is exactly why they must be carefully thought through, without deference to the generation of capital. What we are seeing now is only the beginning of an AI revolution, and we all share the burden of setting the right precedent for how we handle ChatGPT and other generative AIs, especially when they have so much potential to do harm.

At the end of the day, ChatGPT is a tool, and just like any other tool, its value is in the hands of those who wield it. The box has been opened, and the best we can do is to adapt—to approach the newfound technologies at our disposal cautiously and conscientiously.

By Jasper Ryan Buan

By Liv Licardo
