Screwing Up Artificial Intelligence Could Be Disastrous, Experts Say
From smartphone apps like Siri to features like facial recognition in photos, artificial intelligence (AI) is becoming a part of everyday life. But humanity should take more care in developing AI than it has with other technologies, experts say.
Science and tech heavyweights Elon Musk, Bill Gates and Stephen Hawking have warned that intelligent machines could be one of humanity’s biggest existential threats. But throughout history, human inventions, such as fire, have also posed dangers. Why should people treat AI any differently?
“With fire, it was OK that we screwed up a bunch of times,” Max Tegmark, a physicist at the Massachusetts Institute of Technology, said April 10 on the radio show Science Friday. But in developing artificial intelligence, as with nuclear weapons, “we really want to get it right the first time, because it might be the only chance we have,” he said.
On the one hand, AI has the potential to achieve enormous good in society, experts say. “This technology could save thousands of lives,” whether by preventing car accidents or avoiding errors in medicine, Eric Horvitz, managing director of the Microsoft Research lab in Seattle, said on the show. The downside is the possibility of creating a computer program capable of continually improving itself that “we might lose control of,” he added.
For a long time, society has assumed that smarter must mean better, Stuart Russell, a computer scientist at the University of California, Berkeley, said on the show. But as in the Greek myth of King Midas, whose touch turned everything to gold, ever-smarter machines may not turn out to be what society actually wished for. In fact, the goal of making machines smarter may not be aligned with the goals of the human race, Russell said.
For example, nuclear power gave us access to the almost unlimited energy stored in an atom, but “unfortunately, the first thing we did was create an atom bomb,” Russell said. Today, “99 percent of fusion research is containment,” he said, and “AI is going to go the same way.”
Tegmark called the development of AI “a race between the growing power of technology and humanity’s growing wisdom” in handling that technology. Rather than try to slow down the former, humanity should invest more in the latter, he said.
At a conference in Puerto Rico in January organized by the nonprofit Future of Life Institute (which Tegmark co-founded), AI leaders from academia and industry (including Elon Musk) agreed that it’s time to redefine the field’s goal: not simply to make machines as smart and as fast as possible, but to make them beneficial for society. Musk donated $10 million to the institute to further that goal.
After the January conference, hundreds of scientists, including Musk, signed an open letter describing the potential benefits of AI while warning of its pitfalls.