The rise of ChatGPT and similar large language model (LLM) artificial intelligence systems has generated significant attention and speculation across various domains. Some see them as potential replacements for conventional web searches, while others worry about their impact on job markets. There are even concerns about AI posing an existential threat to humanity. Amidst all this speculation, one fundamental truth is often overlooked: these AI systems, despite their complexity, rely entirely on human knowledge and labor. In other words, they cannot generate new knowledge independently. To understand this better, we must delve into how ChatGPT works and the critical role humans play in its functionality.

How ChatGPT Functions

Large language models like ChatGPT operate on a basic principle: they predict which characters, words and sentences should follow one another, based on patterns in their training data. In the case of ChatGPT, the training data consists of vast amounts of publicly available text gathered from the internet. The training process is complex and resource-intensive and unfolds in several stages. It's essential to recognize that pretraining a language model is a self-supervised endeavor: the model learns to predict the probability of a word or phrase from the context of the words that came before, with the training text itself supplying the answers. The full pipeline spans many systems and stages, including data collection, data processing, tokenization, vocabulary creation, model architecture, initialization, training objectives, training loops, backpropagation, optimization and validation.
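The core idea of predicting what follows from observed patterns can be illustrated with a deliberately tiny sketch: a bigram model that "predicts" the next word purely by counting which words followed which in its training text. The corpus and function names here are invented for illustration; real LLMs use neural networks over billions of tokens, but the statistical principle is the same.

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees this text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# "the" is followed by "cat" twice, "mat" once, "fish" once,
# so the model predicts "cat" with probability 2/4 = 0.5.
print(predict_next("the"))  # ('cat', 0.5)
```

Note that the model has no notion of what a cat is; it simply reproduces the statistics of its training text, which is exactly why the quality and coverage of human-written data matter so much.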

In practice, people express a wide range of opinions and information about topics such as quantum physics, politics, health or historical events. So the question becomes: how can the model differentiate between valid and invalid statements when people provide varying perspectives?

The Need for Feedback

Users of ChatGPT can rate responses as good or bad. If a response is rated as bad, users are asked to provide an example of what a good answer should look like. ChatGPT and similar models learn what responses are deemed good or bad through such feedback from users as well as from the development team and contracted workers who also label the model's output.
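The feedback loop described above can be sketched as a simple data transformation: a thumbs-down rating paired with a user-supplied correction yields a preference record that humans, not the model, created. The field names and function below are hypothetical, chosen only to make the idea concrete; production systems use far richer labeling pipelines.

```python
# Hypothetical feedback log: each entry records a prompt, the model's
# response, a user rating, and an optional human-written correction.
feedback_log = [
    {"prompt": "Capital of Australia?", "response": "Sydney",
     "rating": "bad", "correction": "Canberra"},
    {"prompt": "2 + 2?", "response": "4",
     "rating": "good", "correction": None},
]

def to_preference_pairs(log):
    """Turn bad ratings with corrections into (prompt, preferred, rejected)
    tuples -- the kind of human-labeled signal used to tune a model."""
    pairs = []
    for entry in log:
        if entry["rating"] == "bad" and entry["correction"]:
            pairs.append((entry["prompt"], entry["correction"],
                          entry["response"]))
    return pairs

print(to_preference_pairs(feedback_log))
# [('Capital of Australia?', 'Canberra', 'Sydney')]
```

Every preferred answer in such a dataset is human labor: someone had to know the right answer and take the time to write it down.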

ChatGPT cannot independently compare, analyze or evaluate arguments or information. It can only generate text sequences similar to those used by others when making comparisons, analyses or evaluations. It prefers responses that resemble what it has been taught are good answers in the past.

When ChatGPT provides a good answer, it draws on a significant amount of human effort that went into defining what constitutes a good answer in the first place. As can be imagined, there are numerous humans behind the scenes who contribute to this process, and their continued involvement is essential for the model to improve and expand its content coverage.

What ChatGPT Cannot Do

The importance of feedback is evident in ChatGPT's tendency to produce inaccurate answers, a phenomenon often referred to as "hallucination." Without proper training, ChatGPT cannot provide accurate responses, even when accurate information is readily available on the internet. This limitation becomes apparent when asking ChatGPT about obscure or less common topics.

LLMs lack the capacity to understand or evaluate information autonomously. They rely on humans to perform these tasks. They cannot, for example, determine the accuracy of news reports, assess arguments or make informed judgments. They depend on human input for all these functions.

Furthermore, if the consensus on a particular topic changes, such as the health effects of salt or the utility of early breast cancer screenings, these models need extensive retraining to incorporate the updated consensus.

In summary, large language models like ChatGPT exemplify the dependence of AI systems on human input. They rely not only on their designers and maintainers but also on users for feedback and improvements. When ChatGPT provides a useful answer, it is a result of the collective knowledge and labor of countless individuals who contributed to its training and refinement.

Rather than being autonomous superintelligences, these AI models are tools that amplify human capabilities. ChatGPT, like all technologies, is only as powerful as the knowledge and guidance provided by the humans who shape its functionality.

The mystique surrounding ChatGPT and similar LLMs often obscures the critical role humans play in their operation. These AI systems are not independent entities; they are products of human design and ongoing human involvement. Understanding their limitations and dependencies is crucial to navigating the evolving landscape of artificial intelligence.
