Don’t Shoot The Messenger: The Faults and Fortune of ChatGPT

Arthur C. Clarke, the English science-fiction author, famously observed that any sufficiently advanced technology is "indistinguishable from magic," and OpenAI is directly responsible for one of this year's most memorable online fascinations.

Integrating artificial intelligence (AI) into our daily routines is no longer in the peripheral vision of tech giants. It is evident everywhere, from your Spotify Wrapped listening activity to the essayist bots that randomly spawn to respond to your tweets. Today, each and every one of us already demonstrates a propensity to consume AI without a second thought.

If that is not compelling enough on its own, the social acceptance of and interest in AI is best encapsulated by how long it took each major tech platform to gain traction and reach one million users.

Netflix, on the more conservative end of the spectrum, took three and a half years to reach one million users, whereas Facebook did so in ten months. Spotify took half the time that Facebook did, and Instagram took even less, at two and a half months.

It took five days for OpenAI's ChatGPT service to reach one million users.

ChatGPT's performance shook every industry to the core and sounded the alarm to rethink current strategies and what constitutes a fundamental competitive edge.

AI developments will make it redundant to evaluate someone's intelligence and competency based solely on what they can recall in the workplace or in a classroom. Rather, the practice of judging a person’s abilities by their questions will gain prominence in our society.

While times have changed since Voltaire first proposed a similar line of reasoning, the advent of chatbots that scour the web's knowledge on our behalf promises a new and unanticipated future for many industries. It raises the question of how ChatGPT will be used by current employees and, most importantly, by the upcoming labour force: students.

As new algorithms emerge that condense the research process by serving information to us rather than requiring us to sift through sources ourselves, the way in which we are evaluated will shift as well.

The Nuts and Bolts of ChatGPT

ChatGPT's servers generate an average of four and a half billion words a day.

To work with language at this scale, ChatGPT leverages a transformer model to interpret text.

The transformer architecture was first introduced five years ago by a team working at Google Brain. Since then, its use has grown rapidly, owing to its superior abilities in natural language processing.

Unlike the previous standard, the recurrent neural network, a transformer model is built around a self-attention mechanism. Self-attention is designed to imitate cognitive attention, allowing the AI to give certain sections of a text more significance than others.

A key point of difference is that self-attention allows a transformer model to interpret a text in its entirety at once, a concept referred to as parallelization. Recurrent neural networks, by contrast, must interpret one word at a time and are less effective at exploiting contextual clues.
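
To make the mechanism concrete, here is a minimal sketch of scaled dot-product self-attention in Python with NumPy. It is an illustration of the idea rather than OpenAI's implementation; the toy dimensions, and the reuse of the input as queries, keys, and values, are simplifying assumptions.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word vectors.

    X: array of shape (seq_len, d), one row per token.
    For clarity, queries, keys, and values reuse X directly; a real
    transformer would first multiply X by learned weight matrices.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # how relevant each token is to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ X  # each output vector mixes the whole sequence

# Every token attends to every other token in one matrix multiply; this is
# the parallelization that RNNs, stepping word by word, cannot exploit.
tokens = np.random.rand(5, 8)  # 5 tokens, 8-dimensional embeddings (toy sizes)
print(self_attention(tokens).shape)  # (5, 8)
```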

ChatGPT thus has the advantage of accurately predicting what the next word in a text will be by tracking the relationships in sequential data and improving accordingly, part of a process known as generative pre-training.
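
In schematic terms, generation is simply this next-word prediction applied over and over. The sketch below assumes a hypothetical `next_word_distribution` function standing in for the trained model; everything else is the general autoregressive recipe.

```python
import random

def generate(prompt_words, next_word_distribution, max_words=20):
    """Autoregressive generation: predict one word, append it, repeat.

    next_word_distribution(words) is a hypothetical stand-in for the
    trained model; it should return a dict mapping candidate next
    words to probabilities.
    """
    words = list(prompt_words)
    for _ in range(max_words):
        dist = next_word_distribution(words)
        # Sample the next word in proportion to the model's confidence.
        next_word = random.choices(list(dist), weights=list(dist.values()))[0]
        words.append(next_word)
    return " ".join(words)

# Toy stand-in model with a fixed two-word vocabulary, just to show the loop.
toy_model = lambda words: {"the": 0.6, "cat": 0.4}
print(generate(["once", "upon"], toy_model, max_words=5))
```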

This is what GPT ultimately stands for — Generative Pre-Trained Transformer.

[Image: A more intimate look into the training of ChatGPT]

OpenAI’s Lore

A San Francisco-based AI research laboratory, OpenAI was co-founded by technocrat Elon Musk alongside other Silicon Valley figures such as Sam Altman. Musk characterized its organizational culture as taking whatever actions "are most likely to improve the positive future" of AI and minimize existential threat; OpenAI's aim is to ensure that AI remains beneficial rather than a worldwide hazard.

Musk has since resigned from OpenAI's board in light of potential agency problems as Tesla builds AI into its own product lines. He also expressed concern over OpenAI's shift from a non-profit to a capped-profit structure, wherein returns on investment have a designated cut-off point, and over how monetary gain could compromise OpenAI's original mission.

ChatGPT, OpenAI's chatbot service available to the public, is the latest of multiple revisions, with previous launches including GPT-1 in 2018, GPT-2 in 2019, and GPT-3 in 2020. While it has not graduated from the research-preview phase just yet, users can sign up and test its functions for free.

This prompts the following thought: what happens when work generated by AI becomes indistinguishable from that of a human? And what strategies do companies currently leverage to differentiate between the two?

The Uncanny Valley of AI-Generated Text

Users can undeniably benefit from ChatGPT, whether from its strict answer formatting (an introduction, body paragraphs, and a conclusion) or from its clear grammar and syntax.

However, one implication of using ChatGPT is that the seemingly optimal answer it provides can be a mirage. Although the words employed may be correctly spelled and in the right order, the AI ultimately does not know what any of them mean.

As a result, the answers are often cut-and-dried, regurgitating the talking points most commonly found online. Unless a human is in the loop to tailor the responses and edit the output accordingly, text-generating algorithms tend to provide encyclopedic descriptions when answering user prompts. ChatGPT seemingly does not take a particular angle on an issue and lacks nuance.

Text that is written by AI lacks unique perspectives on issues because it is not a sentient being with its own personal experiences, biases, and thought processes. Instead, artificial intelligence relies on algorithms and data inputs to generate text, which means that it is limited to presenting information and ideas that are based on the information it has been trained on. This can lead to a lack of diversity in the perspectives and viewpoints presented in the text, as the AI may not be able to consider alternative viewpoints or challenge its own assumptions.

Another concern resides in the potential misuse of the technology in the name of self-interest. Consider, for example, a student whose writing and literacy skills are less developed and who turns to an AI to achieve a better grade. When OpenAI researchers first presented their paper introducing GPT-3 two years ago, they called for more research into AI-detection software.

One current strategy for determining whether a text was written by AI is to analyze the frequency of irregularities in it. Each person's writing contains its own idiosyncrasies: they might employ less common phrases, write with highly varied sentence structure, or have other quirks of syntax or punctuation that contribute to a distinct voice.

Conversely, perfect grammar, consistent sentence formats, repetitive word usage, an uncannily formal voice, unclear subjects, and ambiguous descriptions of content can all serve as signs that a text was written by AI.

Given that transformer models work by predicting the most likely next word in a sentence, safe, high-frequency words such as ‘the,’ ‘it,’ and ‘is’ appear unusually often, and, given enough text, this can indicate that it was written by AI.
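
As a toy illustration of this kind of surface statistic, the following sketch scores a text by the share of very common words and by how much its sentence lengths vary. The word list and the choice of signals are assumptions for demonstration, not a production detector.

```python
import re
import statistics

# A few high-frequency function words; a real detector would use a far larger list.
COMMON_WORDS = {"the", "it", "is", "a", "of", "to", "and", "in", "that", "this"}

def surface_statistics(text):
    """Return two crude signals: common-word share and sentence-length spread."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    common_share = sum(w in COMMON_WORDS for w in words) / len(words)
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.pstdev(lengths)  # human writing tends to vary more
    return common_share, burstiness

share, burst = surface_statistics(
    "The model writes. The model is consistent. The model is formal."
)
print(f"common-word share: {share:.2f}, sentence-length spread: {burst:.2f}")
```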

Ironically, an AI may give itself away precisely because of how well-crafted its sentences are.

This predictability is the basis of how detector systems determine the integrity of a text, as demonstrated by a team of researchers at Google Brain three years ago. The caveats are that enough text needs to be available and that the approach assumes humans can readily notice the uncanniness of a text.
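
In spirit, such detector systems ask a language model how predictable each word was, flagging text that is uniformly unsurprising. The sketch below captures that idea, with `token_probability` as a hypothetical stand-in for a real model's scoring function.

```python
import math

def average_surprisal(words, token_probability):
    """Mean surprisal (negative log-probability) of each word given its prefix.

    token_probability(prefix, word) is a hypothetical stand-in for a trained
    language model; it should return the model's probability of `word`
    following `prefix`. A low average surprisal means every word was
    predictable, which, on this approach, hints that a machine wrote it.
    """
    total = 0.0
    for i, word in enumerate(words):
        p = token_probability(words[:i], word)
        total += -math.log(max(p, 1e-12))  # clamp to avoid log(0)
    return total / len(words)

# Toy stand-in: every word is deemed highly predictable by the "model".
print(average_surprisal("the cat sat".split(), lambda prefix, w: 0.9))
```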

Consider the fourth paragraph in this section of the article: I told ChatGPT to write about how text that is written by artificial intelligence lacks unique perspectives on issues. Did you have a feeling that the text seemed out of place? Did it lean heavily on common, predictable words? Or was there a suspension of disbelief, and did you immediately accept it as part of the article? We need to develop a sense of intuition for text integrity; otherwise, detector systems are useless.

This is exactly what current research aims to reveal: our general rate of success at identifying AI-generated text when relying on our own subjective judgment. Researchers at Cornell University discovered that people deemed fake articles written by GPT-2 credible 66% of the time. In another study, people with little training in spotting text generated by GPT-3 identified AI-written text at a rate no better than random chance.

Does this mean that we must halt all progress? Perhaps we simply have to accept that, in a reality as complex and multivariate as ours, picking one of two extremes does not make for an optimal decision.

The Co-Existence of AI and Human Work

Thomas Kuhn, the American physicist and philosopher, once said that "the successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science," and it follows that we must adapt appropriately to current innovations to survive in the new world.

As humans, we have an inherent flair in our writing, influenced by our upbringing and the uniquely limited information we are each exposed to, and that flair ultimately cannot be imitated. Without our original data, ChatGPT would not be capable of performing its functions; it is entirely dependent on analyzing our inputs. Yet, although ChatGPT's outputs may be uninspired, it can produce them in a fraction of the time it would take to complete them manually.

Rather than pitting the two against each other and assuming their differences negate the existence of the other, work generated by humans and by ChatGPT can form a relationship of commensalism, wherein one party (humans) benefits while the other (the AI) neither benefits nor is harmed.

This will take different forms within each industry. Lawyers, for instance, will not be replaced, because new legal precedents will always emerge and AI is limited to interpreting historical data. ChatGPT's functionality, on the other hand, may allow stored cases to be uncovered more easily by feeding it a prompt.

This may translate similarly to the education system, in which students may begin to compete on the basis of who can leverage digital tools the most effectively to come up with the right answers. Doing so will likely require dealing with more challenging subject matter rather than rudimentary topics.

We will witness a decline in the norm of being expected to regurgitate information and being rewarded for doing so. Instead, the spotlight will be on individuals who can ask questions of the highest relevance and quality, possessing the right set of skills and digital tools to meet the objectives of tomorrow. Now, what question does this leave you with?
