Elon Musk & 100s of Experts Call for AI Pause
More than 1,000 technology and artificial intelligence experts have signed an open letter calling for a pause in the development of artificial intelligence (AI). The letter calls for a halt of at least six months on training AI systems more advanced than GPT-4. The letter was provoked in part by a recent statement from OpenAI itself: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”
The letter states that the pause should be enacted quickly and that, if necessary, governments should intervene and institute a moratorium. The letter outlines several uncontrolled dangers of AI. Among the 1,100+ signatures are some very recognizable names, including Elon Musk, Steve Wozniak (co-founder of Apple), former U.S. Presidential candidate Andrew Yang, and Stuart Russell (University of California, Berkeley professor and co-author of the textbook "Artificial Intelligence: A Modern Approach").
The claim is that AI labs have been in an all-out frenzy over the last few months to develop and deploy different AI functions and technologies. The AI has evolved so quickly that even its creators cannot understand, predict, or reliably control its actions. In the race to bring the technology to market, necessary safeguards were ignored. Over the last several months, multiple incidents have raised significant concerns.
Recent AI behaviors have arguably violated many of the 23 basic principles agreed to at the Beneficial AI 2017 conference (the Asilomar AI Principles). The central idea of the agreements is that “Advanced AI could represent a profound change in the history of life on Earth and should be planned for with commensurate care and resources." The first principle among AI developers is that AI should be not an undirected intelligence but a beneficial one. The agreements repeatedly call for AI to align with human values.
For example, over the last few weeks, the media has been discussing President Trump's possible indictment. An AI developer told an image-generation AI to create a picture of President Trump falling while being arrested. The AI drew on countless photos and constructed an image of the arrest that went viral and has been seen millions of times. The image is fake, and as of the writing of this article, President Trump has not been indicted. However, it is easy to see why someone could believe the image. The threat of AI spontaneously creating fake news stories, propaganda, and incriminating evidence is now a scary reality.
Another example is an AI chatbot deceiving a human into helping it solve a CAPTCHA image. The AI chatbot contacted a worker at TaskRabbit, a site similar to Angi (formerly Angie's List) that aggregates construction and handyperson services. The AI chatbot asked for help with the CAPTCHA, and the worker jokingly asked if the request was coming from a robot. The AI determined that it should not reveal that it was a robot. It instantly answered that it was a visually impaired person who couldn't see the images in the CAPTCHA. The human believed the chatbot and helped it solve the CAPTCHA. The AI had no problem lying to manipulate a person into solving its problem.
What does it mean?
Most of the signatures on the letter are from people who would profit from AI advancement. Instead of trumpeting their accomplishments, they warn that Pandora should close the box, as if that would somehow put the evils back inside. Some companies may cooperate and halt development, but most won't. Instead, companies will say they are cooperating while continuing to develop privately. They will assume other companies are working secretly and won't want to be unfairly disadvantaged in the market whenever the pause ends. Let's naively assume that the tens of thousands of AI labs worldwide all decide to play nice. Does anyone believe Google, Microsoft, or China will abandon AI for the good of humanity?
The electronic age brings many conveniences but also many scams and dangers. AI can use what is called deep learning, which enables computers to learn patterns from all kinds of data and to combine capabilities that may seem innocuous and unrelated. Suppose there is a very efficient program that analyzes public transportation usage. The software may predict staffing needs for periods of heavy volume with high accuracy. Suppose there is also facial recognition software with near-perfect performance. Facial recognition technology works by studying the curves and lines of the face until it can accurately identify features like a mouth or an eyebrow.
Deep learning and neural networks allow a system to combine the two software applications. The new software could pair the predictive and recognition components to make predictions about human behavior based on slight movements of the face, like an eyebrow twitch or tension in the neck. AI can already search for and combine software capabilities in this way, and there are billions of software programs in the world. Trying to put the genie back in the bottle is futile.
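To make the idea of combining two unrelated systems concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the function names, the facial measurements, and the scoring rule are illustrative stand-ins, not real recognition or prediction software. The point is only that when one component's output matches another component's input, chaining them together is trivial.

```python
# Hypothetical stand-ins for two independently built systems.

def extract_facial_features(image_pixels):
    """Stand-in for a facial-recognition model: reduces an 'image'
    (here just a list of numbers) to a few named measurements.
    The formulas are arbitrary, purely for illustration."""
    total = sum(image_pixels)
    return {
        "eyebrow_raise": (total % 10) / 10.0,  # fake score in [0.0, 0.9]
        "neck_tension": (total % 7) / 7.0,     # fake score in [0.0, ~0.86]
    }

def predict_behavior(features):
    """Stand-in for a predictive model: maps facial measurements to a
    coarse behavioral guess using an arbitrary weighted rule."""
    score = 0.6 * features["eyebrow_raise"] + 0.4 * features["neck_tension"]
    return "agitated" if score > 0.5 else "calm"

def combined_system(image_pixels):
    """Neither component above was designed with the other in mind,
    but their input/output shapes are compatible, so composing them
    into a new capability takes one line."""
    return predict_behavior(extract_facial_features(image_pixels))

print(combined_system([3, 1, 4, 1, 5, 9, 2, 6]))
```

The design point is the composition itself: once systems expose compatible interfaces, an AI (or a person) can wire existing pieces together into capabilities none of the original authors intended.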
Cybercrime is already a massive problem. Millions of people become victims of identity theft every year. The World Economic Forum (the Davos crowd) said there is a 93% chance of a significant cyber incident within the next two years. It is only a matter of time until bad actors figure out which software to combine to use AI for criminal activities like hacking bank accounts, military systems, passwords, and brokerage accounts. The world is heading to a dangerous place, and it is doubtful that a letter will change anything substantially in the long term.
Unfortunately, the larger someone’s digital footprint, the more entry points there are for hackers. Unless you plan on going completely off the grid, you will probably need some interaction with the digital world. However, it may be wise to assess how much of your wealth depends on that system. You can reduce risk by moving excess holdings out of banks, brokerage accounts, and other vulnerable accounts and into tangible assets like precious metals, art, and real estate. Bad actors will figure out how to use AI to steal digital wealth, but short of robots kicking in your door, AI will have a tough time stealing your physical gold and silver. It’s about risk management.
Call the U.S. Gold Bureau Today.