[ti:'Godfather of AI' Quits Google, Warns of Serious Technology Dangers]
[al:Science & Technology]
[ar:VOA]
[dt:2023-05-04]
[by:www.voase.cn]
[00:00.00]A man widely considered the "godfather" of artificial intelligence (AI) says he quit his job at Google to speak freely about the dangers of the technology.
[00:16.26]Geoffrey Hinton recently spoke to The New York Times and other press about his experiences at Google and his wider concerns about AI development.
[00:30.61]He told the Times he left the search engine company last month after leading the Google Research team in Toronto, Canada, for 10 years.
[00:43.36]During his career, the 75-year-old Hinton has pioneered work on deep learning and neural networks.
[00:53.80]A neural network is a computer processing system built to act like the human brain.
[01:01.86]Hinton's work helped form the base for much of the AI technology in use today.
[01:09.94]In 2019, Hinton and two other computer scientists received the Turing Award for their separate work related to neural networks.
[01:22.55]The award has been described as the "Nobel Prize of Computing."
[01:28.39]The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
[01:38.92]In recent months, a number of new AI technologies have been introduced.
[01:45.48]Microsoft-backed American startup OpenAI launched its latest AI model, GPT-4, in March.
[01:57.27]Other technology companies have invested in similar tools, including Google's Bard system. Such tools are known as "chatbots."
[02:09.61]The recently released AI tools have demonstrated the ability to carry on human-like discussions and create complex documents based on short, written commands.
[02:24.29]Speaking to the BBC, Hinton called the dangers of such tools "quite scary."
[02:32.06]He added, "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon will be."
[02:42.53]He said he believes AI systems are getting smarter because of the massive amounts of data they take in and examine.
[02:52.94]Hinton also told MIT Technology Review he fears some "bad" individuals might use AI in ways that could seriously harm society.
[03:06.68]Such effects could include AI systems interfering in elections or inciting violence.
[03:15.09]He told the Times he thinks AI systems could create a world in which people will "not be able to know what is true anymore."
[03:26.67]Hinton said he retired from Google so that he could speak openly about the possible risks of the technology as someone who no longer works for the company.
[03:39.43]"I want to talk about AI safety issues without having to worry about how it interacts with Google's business," he told MIT Technology Review.
[03:53.62]Since announcing his departure, Hinton has said he thinks Google has "acted very responsibly" in its own AI development.
[04:05.54]In March, hundreds of AI experts and industry leaders released an open letter expressing deep concerns about current AI development efforts.
[04:18.34]The letter identified a number of harms that could result from such development.
[04:26.03]These included increases in propaganda and misinformation,
[04:31.68]the loss of millions of jobs to machines and the possibility that AI could one day take control of our civilization.
[04:42.14]The letter urged a halt to development of some kinds of AI.
[04:48.67]Turing Award winner Bengio, Apple co-founder Steve Wozniak and Elon Musk, leader of SpaceX, Tesla and Twitter, signed the letter.
[05:02.07]The organization that released the letter, Future of Life, is financially supported by the Musk Foundation.
[05:11.64]Musk has long warned of the possible dangers of AI.
[05:17.15]Last month, he told Fox News he planned to create his own version of some AI tools released in recent months.
[05:26.85]Musk said his new AI tool would be called TruthGPT.
[05:33.44]He described it as "truth-seeking AI" that will seek to understand humanity so it is less likely to destroy it.
[05:42.97]Alondra Nelson is the former head of the White House Office of Science and Technology Policy, which seeks to create guidelines for the responsible use of AI tools.
[05:59.20]She told The Associated Press, "For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn't only include AI experts and developers."
[06:18.68]Nelson added that she hopes the recent attention on AI can create "a new conversation about what we want a democratic future and a non-exploitative future with technology to look like."
[06:37.45]I'm Bryan Lynn.