The artificial intelligence (AI) revolution is here, and it is bound to change the world as we know it—or so proclaims the hype following the release of OpenAI’s ChatGPT version 3.5 in November 2022, which was only the beginning. Indeed, much has happened since then with the release of the much-improved version 4.0, which was integrated into Microsoft’s Bing search engine, and the recent beta release of Google’s Gemini.
Much has since been written about what AI could mean for humanity and society, from the positive extremes of soon-to-arrive Star Trek technologies and the “zero marginal cost” society to the supposedly imminent “AI takeover” that will cause mass unemployment or the enslavement (if not extermination) of mankind. However, how much of this is fiction, and how much is real? In this three-part article series, I will briefly discuss the reality and fiction of AI, what it means for economics (and the economy), and what the real dangers and threats are. Is this the beginning of the end or the end of the beginning?
Most people’s prior experience of the term “artificial intelligence” comes from science fiction books and movies. The AI in this type of media is a nonbiological conscious being, a machine man of sorts. The intelligent machine is often portrayed as lacking certain human qualities such as empathy or ethics, but it is also unencumbered by human limitations such as imperfect computational ability and limited knowledge. Sometimes the AI is benign and a friend or even servant of mankind, such as the android Data in Star Trek: The Next Generation, but AI is often used to illuminate problems, tensions, or even an existential threat. Examples of such dystopian AI include Skynet in the Terminator movies, the machines in The Matrix, and HAL 9000 in 2001: A Space Odyssey.
The “AI” of our present real-world hype, such as OpenAI’s ChatGPT and Google’s Gemini, is nothing like these sci-fi “creatures”; these systems are nowhere near conscious beings. In fact, what we have today is so far from what we typically would call an intelligence that a separate term is used to distinguish the “real thing” from the existing chatbots now referred to as “AI”: artificial general intelligence. The conscious, thinking, reasoning, and acting nonbiological creature-machines of sci-fi are artificial general intelligences. This raises the question: What is AI?
Machine Learning and Large Language Models
Present-day AI is an intelligence in the same sense that a library of books is. Both hold vast amounts of information categorized in a number of different ways, such as by topic, keyword, author, and publisher. In a regular library, the books are categorized to help users find what they are looking for.
However, imagine if all the books in the library were scanned so that all the letters, words, sentences, and so on were stored together and made easily searchable. This mass of content could then be categorized inductively, meaning that computer software sifting through all the content could figure out new categories of its own based on the data themselves. What are the common words and phrases? How are words combined, in what order, and in what contexts do those combinations appear? What phrases are more frequent in certain types of books or chapters? What combinations of words are rare or nonexistent? Do word use and sentence structure differ between authors, books, and topics?
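To make this concrete, here is a minimal sketch in Python of the kind of inductive counting described above. The miniature “corpus” and every name in it are invented for illustration; real systems do this over billions of documents:

```python
from collections import Counter

# A miniature "scanned library": three invented snippets of text.
corpus = [
    "the market coordinates the plans of buyers and sellers",
    "the state regulates the plans of buyers and sellers",
    "prices emerge from the plans of acting individuals",
]

# Count individual words across the whole corpus.
word_counts = Counter(w for text in corpus for w in text.split())

# Count adjacent word pairs (bigrams) to see how words are combined
# and in what order -- categories no librarian specified in advance.
bigram_counts = Counter(
    pair
    for text in corpus
    for pair in zip(text.split(), text.split()[1:])
)

print(word_counts.most_common(5))    # most common words
print(bigram_counts.most_common(5))  # most common word orderings
```

Nothing here is told what the categories are; the frequencies themselves become the categories.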
Such inductive sifting through the content, guided by statistical algorithms, is referred to as “machine learning” and is a powerful tool to find valuable needles in informational haystacks. Note that these needles may not already be known—machine learning finds needles we know exist but can also uncover needles we had no idea existed. For example, using such techniques to go through medical data can find (and has found) correlations and potential causes of diseases that were previously unknown. Similarly, the Mercatus Center at George Mason University has fed regulatory texts through such machine learning algorithms to create RegData, a database that allows users to analyze, compare, and track regulatory burdens in the United States and beyond.
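As a hedged illustration of that needle-finding (not the actual method behind the medical studies or RegData, both of which are far more sophisticated), a few lines of Python can scan synthetic data for correlations nobody asked about in advance:

```python
import numpy as np

# Synthetic data: rows are "patients," columns are measurements.
# Nothing here is real medical data; it only illustrates letting
# statistics surface relationships we did not specify in advance.
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 80, n)
marker_a = 0.5 * age + rng.normal(0, 5, n)   # hidden link to age
marker_b = rng.normal(50, 10, n)             # unrelated noise

data = np.column_stack([age, marker_a, marker_b])
names = ["age", "marker_a", "marker_b"]

# Compute every pairwise correlation and flag the strong ones --
# including any the analyst never thought to look for.
corr = np.corrcoef(data, rowvar=False)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.5:
            print(f"{names[i]} ~ {names[j]}: r = {corr[i, j]:.2f}")
```

The strong age-to-marker link is “discovered” even though no one told the program to look for it; that is the sense in which machine learning uncovers needles we had no idea existed.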
Whereas RegData is intended to support social science research on regulations, machine learning can be used on all kinds of information. When such algorithms are run on enormous amounts of text in order to figure out how language is used, the result is called a large language model (LLM). These models thus capture a statistical “understanding” of how a language is used, or, as the Cambridge Dictionary puts it (explaining the generative pretrained transformer, or GPT, the LLM on which ChatGPT is based), “a complex mathematical representation of text or other types of media that allows a computer to perform some tasks, such as interpreting and producing language, recognizing or creating images, and solving problems, in a way that seems similar to the way a human brain works.”
Indeed, based on its statistical understanding of language, an LLM chatbot can predictively generate text responses to questions and statements in a way that mimics a real conversation. It thereby gives the appearance of understanding questions and creating relevant responses; it can even “pretend” to have emotions and express empathy or gratitude based on how it understands that words can be used.
In other words, LLM chatbots like ChatGPT can arguably pass the Turing test as they make it very difficult for a human to distinguish their responses from a real human’s. Still, they are statistical prediction engines.
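A toy sketch can show what “statistical prediction engine” means in practice. Real LLMs use vast neural networks rather than the simple word-pair table below, which is only an illustrative stand-in, but the predictive principle is the same: pick the next word in proportion to how often it followed the previous one in the training text:

```python
import random
from collections import Counter, defaultdict

# A miniature "training corpus"; real models train on trillions of words.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)
tokens = training_text.split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    """Predictively generate text, one statistically likely word at a time."""
    out = [start]
    for _ in range(length):
        options = following[out[-1]]
        if not options:
            break
        # Sample in proportion to observed frequency: no reasoning,
        # only reproduction of past patterns.
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

The generator produces fluent-looking snippets such as “the cat sat on the rug” without understanding anything; it only reproduces statistical patterns from its training text.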
But Is AI Intelligent?
It is certainly an impressive feat to have software mimic human conversation to the point of tricking real humans into believing it is a person. However, the question of whether it is intelligent remains. To again refer to the Cambridge Dictionary, intelligence means “the ability to learn, understand, and make judgments or have opinions that are based on reason.” Whereas we sometimes use verbs like “learn” and “understand” for machines, these uses are figurative, not literal. A pocket calculator does not “understand” mathematics just because it can present us with answers to mathematical questions or solve equations; it has not “learned” mathematics, and it cannot “make judgments” or “have opinions.”
Certainly, AI is significantly more advanced than a calculator. However, this does not take away from the fact that the two are logically the same: both present results based on predetermined, prestructured, and precollected rules and data; neither has agency or consciousness, and neither can create anything de novo. This is obvious for the calculator, which is comparatively stupid and only produces outputs according to simple rules of mathematics.
However, the same is true for AI. It is, of course, enormously more complex than a calculator and has the added ability to create its own categories and find relationships inductively, but it does not “have opinions that are based on [its own] reason.” It only predictively generates responses that, based on the texts it has already processed, are statistically likely to be what a human would (or at least could) produce. This is why AI at times, despite the vast knowledge it has access to, spits out gobbledygook (so-called hallucinations) and has a hard time sticking to what is true. It simply cannot tell the difference. (It cannot “tell” at all.)
In other words, AI is logically speaking the very opposite of what we would expect from a human (or alien or artificial) intelligence: it is backward-looking, makes up responses based on already existing language data, and does not add anything that is not statistically (re)producible from past information. It also does not fail, flounder, or forget, and it lacks subjectivity.
An actual intelligence would, of course, rely on experience too, but it would have the ability to generate novel content and implications. It would be able to think anew and creatively come up with different conclusions based on the same data. An actual intelligence would also forget valuable pieces of information, make errors, and use faulty inferences, and it would subjectively weigh and interpret facts, or even choose to disregard the data altogether.
However, even though AI is arguably not an intelligence, at least not in the sci-fi sense, this does not mean that it is unimportant or lacks implications. The technological advance that it represents is nothing short of revolutionary, and it will have far-reaching implications for both the economy and society.