Facebook's artificial intelligence researchers have a plan to make algorithms smarter by exposing them to human cunning. They want your help to supply the trickery.
On Thursday, Facebook's AI lab launched a project called Dynabench that creates a kind of gladiatorial arena in which humans try to trip up AI systems. Challenges include crafting sentences that cause a sentiment-scoring system to misfire, for example by reading a comment as negative when it is actually positive. Another involves tricking a hate speech filter, a potential draw for teens and trolls. The project initially focuses on text-processing software, although it could later be extended to other areas such as speech, images, or interactive games.
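To get a feel for the kind of probing Dynabench crowdsources, here is a minimal sketch that pokes at an off-the-shelf sentiment model from the open-source Hugging Face transformers library. The library choice and the probe sentences are illustrative assumptions; Dynabench serves its own models through its website, not this code.

```python
from transformers import pipeline

# Load a generic off-the-shelf sentiment classifier (a stand-in for
# the models Dynabench hosts, not Dynabench itself).
classifier = pipeline("sentiment-analysis")

# Sentences a human might craft to trip the model up: sarcasm and
# negation are classic ways to make a sentiment score misfire.
probes = [
    "Oh great, another update that deletes all my photos.",     # sarcasm
    "Not bad at all, honestly.",                                 # negated negative
    "The acting was flawless; everything else was a disaster.",  # mixed signal
]

for text in probes:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")
```

Dynabench's twist is that examples that successfully fool a model are collected and fed back into training, so the benchmark keeps hardening as people find new holes.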
Subjecting AI to provocations from people is intended to give a truer measure of the intelligence (and stupidity) of these systems, and to provide data that can improve them. Researchers typically compare algorithms by scoring how accurately they label images or answer multiple-choice questions on standard collections of data, known as benchmarks.
Facebook researcher Douwe Kiela says those tests don't really measure what he and others in the field care about. "The thing we're really interested in is how often it makes mistakes when it interacts with a person," he says. "With current benchmarks, it looks like we're amazing at doing language in AI, and that's very misleading, because we still have a lot to do."
The researchers hope analyzing cases where AI was snookered by people will make algorithms less dupable.
Kiela hopes AI experts and ordinary netizens alike will find it fun to log on and spar with AI to earn virtual badges, but the platform will also let researchers pay for contributions through Amazon's crowdsourcing service Mechanical Turk. AI labs at Stanford, the University of North Carolina, and University College London will all maintain artificial intelligence tests on the Dynabench platform.
Facebook's project comes as more AI researchers, including the social network's VP of artificial intelligence, say the field needs to broaden its horizons if computers are to become capable of handling complex, real-world situations.
Over the past eight years, breakthroughs in an AI technique called deep learning have brought consumers speech recognition that mostly works, phones that auto-sort dog photos, and some hilarious Snapchat filters. Algorithms can now unspool eerily fluent text.
Yet deep learning software stumbles in situations outside its narrow training. The best text-processing algorithms can still be tripped up by the nuances of language, such as sarcasm, or how cultural context can shift the meaning of words. Those are major challenges for Facebook's hate speech detectors. Text generators often spew nonsensical sentences adrift from reality.
Those limitations can be hard to see if you look at the standard benchmarks used in AI research. Some tests of AI reading comprehension have had to be redesigned and made more challenging in recent years because algorithms figured out how to score so highly, even surpassing humans.
Yejin Choi, a professor at the University of Washington and a research manager at the Allen Institute for AI, says such results are deceptive. The statistical might of machine learning algorithms can discover tiny correlations in test datasets, undetectable by people, that reveal correct answers without requiring a human's wider understanding of the world. "We are seeing a Clever Hans situation," she says, referring to the horse that faked numeracy by reading human body language.
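A toy example, not drawn from Choi's research, can make the Clever Hans effect concrete. Below, an invented dataset artifact (the word "honestly" appearing only in negative training examples) lets a simple scikit-learn model ace its training data while learning nothing about sentiment.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set with a hidden artifact: every negative example
# happens to contain the word "honestly".
train_texts = [
    "honestly the plot made no sense",   # negative
    "honestly I wanted my money back",   # negative
    "a warm and moving film",            # positive
    "the cast was wonderful",            # positive
]
train_labels = [0, 0, 1, 1]  # 0 = negative, 1 = positive

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

# A clearly positive review that also contains the artifact. The model
# will most likely call it negative: it learned the shortcut, not sentiment.
test = ["honestly the best film of the year"]
print(model.predict(vec.transform(test)))  # expected output: [0]
```

Human-in-the-loop benchmarks like Dynabench aim to break exactly these shortcuts, because people can keep inventing examples where the artifact and the true answer diverge.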
More AI researchers are now seeking alternative ways to measure and spur progress. Choi has tested some of her own, including one that scores text-generation algorithms by how well their responses to Reddit posts rank against those from people. Other researchers have experimented with having humans try to trick text algorithms, and have shown how examples collected this way can improve AI systems.
Algorithms tend to look less smart when pitted against those more challenging tests, and Choi expects to see a similar pattern on Facebook's new Dynabench platform. Projects that strip away AI emperors' clothes could jolt researchers into exploring fresher ideas that lead to breakthroughs. "It will challenge the community to think harder about how learning should really take place with AI," Choi says. "We need to be more creative."
Source: https://www.wired.com/story/try-sneak-bad-words-ai-filters-research/