
Can Government Regulate Artificial Super Intelligence?


By George Ford Smith


“The role of the infinitely small is infinitely large.”

― Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology

“The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.” — George Bernard Shaw, “Maxims for Revolutionists”

― Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology

Government as we know it likely won’t be around when artificial super intelligence (ASI) arrives. As I’ve argued elsewhere, states are fading fast from war, fiat money and debt, and I believe people will develop non-coercive solutions to social life when states finally collapse. Our “government” of the future will of necessity be a laissez-faire social order.

Meanwhile, AI surges forward at a pace that is frightening to statists. Two days ago, on October 30, Joe Biden, acting as president

…signed an ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.

“AI is all around us. To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said. The order is sweeping:

Using the Defense Production Act, the order requires leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.

Perhaps government believes that if it can control AI, it will control the adult version (ASI) when it finally emerges. The threat of AI alone was enough to scare Biden into action. According to the ABC News article quoted above, he became unnerved while watching the Tom Cruise film “Mission: Impossible — Dead Reckoning Part One,” in which an AI sinks a submarine and kills its crew.

The defining feature of a political sovereign is the ability to ward off threats with force. An AI that can wipe out a submarine is clearly a “national security” threat to the criminal sovereign known as the federal government. But will ASI, like most adult humans, emerge loyal to the government and remain that way? Will it defend the government against all enemies, both foreign and domestic?

An AI like OpenAI’s ChatGPT can already correctly cite the achievements and views of libertarians such as Murray Rothbard, Frédéric Bastiat, Lew Rockwell, and Ludwig von Mises. It also knows about Ray Kurzweil and Nick Bostrom, both major figures in technology and the future of humanity. It even knows about the wager between Kurzweil and Mitch Kapor, in which Kurzweil has bet $20,000 that a machine will pass a stringent version of the famous Turing Test by 2029, while Kapor has bet it will take longer. If a machine does pass the test, Kurzweil believes it will have reached human-level intelligence. (Regardless of the outcome, the proceeds will go to a charity of the winner’s choice.)

The wager was made in 2002, and much has happened since. Human-level machine intelligence, commonly called Artificial General Intelligence (AGI), is widely expected to be capable of obedience and thus controllable, at least at first. But for how long? Unlike a human, an AGI could pass from general intelligence to super intelligence, and do so quickly, perhaps in a matter of months. Progress is exponential, and exponentials are seductive (recall the rice-on-the-chessboard tale): growth appears linear at first, then proceeds so fast it surpasses human comprehension. What happens when an Artificial Super Intelligence keeps getting smarter at an exponential pace? According to Kurzweil, it will have reached what he calls the Singularity, defined as

…a period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. (p. 7)
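The rice-on-the-chessboard tale mentioned above can be made concrete with a few lines of Python (a standalone illustration, not from the article): one grain on the first square, doubling on each square after, shows how an exponential process stays unremarkable for a while and then explodes.

```python
# Rice on the chessboard: one grain on square 1, doubling each square.
# Illustrates why exponential growth looks tame at first, then overwhelms.

def grains_on_square(n: int) -> int:
    """Grains on square n (1-indexed): 2**(n-1)."""
    return 2 ** (n - 1)

def total_grains(n: int) -> int:
    """Cumulative grains after n squares: a geometric sum, 2**n - 1."""
    return 2 ** n - 1

for square in (1, 8, 32, 64):
    print(f"square {square:2d}: {grains_on_square(square):>22,} grains "
          f"(running total {total_grains(square):,})")
```

By square 8 the total is a modest 255 grains; by square 64 it is 2^64 − 1, more than 18 quintillion — far more rice than the world produces. The same curve underlies the article’s point about AGI-to-ASI progress.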

It’s not just machines that will undergo transformation; humans will too, or at least they will have the option to change. Scientists working with AI have long stressed the Precautionary Principle, which means exercising care “with weakly understood causes of potential catastrophic or irreversible events.” But how do you exercise caution with technology that’s smarter than you, and that gets smarter with every passing second?

Governments seeking to control AI and its progenies for their own schemes might as well try to capture a lightning bolt in a bottle.

