Do We Need a Speedometer for Artificial Intelligence?

Microsoft said last week that it had achieved a new record for the accuracy of software that transcribes speech. Its system missed just one in 20 words on a standard collection of phone call recordings—matching humans given the same challenge.

The result is the latest in a string of recent findings that some view as proof that advances in artificial intelligence are accelerating, threatening to upend the economy. Some software has proved itself better than people at recognizing objects such as cars or cats in images, and Google’s AlphaGo software has overpowered multiple Go champions—a feat that until recently was considered a decade or more away. Companies are eager to build on this progress; mentions of AI on corporate earnings calls have grown more or less exponentially.

Now some AI observers are trying to develop a more exact picture of how, and how fast, the technology is advancing. By measuring progress—or the lack of it—in different areas, they hope to pierce the fog of hype around AI. The projects aim to give researchers and policymakers a clearer-eyed view of which parts of the field are advancing most quickly and what responses those advances may require.

Image recognition software outperformed humans on the standard ImageNet test in 2016. (Image: EFF)

“This is something that needs to be done in part because there’s so much craziness out there about where AI is going,” says Ray Perrault, a researcher at nonprofit lab SRI International. He’s one of the leaders of a project called the AI Index, which aims to release a detailed snapshot of the state and rate of progress in the field by the end of the year. The project is backed by the One Hundred Year Study on Artificial Intelligence, established at Stanford in 2015 to examine the effects of AI on society.

Claims of AI advances are everywhere these days, coming even from the marketers of fast food and toothbrushes. Even boasts from solid research teams can be difficult to assess. Microsoft first announced it had matched humans at speech recognition last October. But researchers at IBM and crowdsourcing company Appen subsequently showed humans were more accurate than Microsoft had claimed. The software giant had to cut its error rate a further 12 percent to make its latest claim of human parity.
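Claims like Microsoft's hinge on word error rate, the standard speech-recognition metric: the number of word-level insertions, deletions, and substitutions needed to turn the transcript into the reference, divided by the reference length. A minimal sketch of that calculation, assuming a plain dynamic-programming edit distance (the function name `word_error_rate` is my own):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of ten reference words gives a rate of 0.1;
# "missing one word in twenty" corresponds to a 5 percent error rate.
rate = word_error_rate(
    "the quick brown fox jumps over the lazy dog again",
    "the quick brown fox jumps over the lazy dog")
```

The disputes the article describes are largely about the human baseline, not this formula: re-transcribing the same audio with more careful annotators changes the reference against which the rate is computed.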

The growing power of chess-playing software over the past three decades. (Image: EFF)

The Electronic Frontier Foundation, which campaigns to protect civil liberties from digital threats, has started its own effort to…
