A group of the world’s most respected AI experts, including two ACM Turing award winners (essentially the Nobel prize of computing), warns that governments must step in to control and, if necessary, halt AI development.
The group released a policy document this week warning that the AI tools currently being built will be incredibly powerful and that their unrestrained use and development could lead to tragedy.
Their document echoes an earlier letter from this year, which warned that humanity was “witnessing a race to the bottom that must be stopped”.
Written by Max Tegmark, a professor of physics and AI researcher at MIT, that letter was signed by thousands of tech industry leaders, including the Apple co-founder Steve Wozniak and Elon Musk, a long-time critic of AI.
Now, the policy document again raising the alarm calls for more government oversight, including the power to step in and halt development of particularly worrisome and untested technologies.
From a report in The Guardian:
“In a policy document published this week, 23 AI experts, including two modern “godfathers” of the technology, said governments must be allowed to halt development of exceptionally powerful models. Gillian Hadfield, a co-author of the paper and the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, said AI models were being built over the next 18 months that would be many times more powerful than those already in operation. “There are companies planning to train models with 100x more computation than today’s state of the art, within 18 months,” she said. “No one knows how powerful they will be. And there’s essentially no regulation on what they’ll be able to do with these models.”
The paper, whose authors include Geoffrey Hinton and Yoshua Bengio — two winners of the ACM Turing award, the “Nobel prize for computing” — argues that powerful models must be licensed by governments and, if necessary, have their development halted. “For exceptionally capable future models, eg models that could circumvent human control, governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready.” The unrestrained development of artificial general intelligence, the term for a system that can carry out a wide range of tasks at or above human levels of intelligence, is a key concern among those calling for tighter regulation.”
You can read their letter here.