Facebook’s artificial intelligence researchers have a plan to make algorithms smarter by exposing them to human cunning. They want your help to supply the trickery.

Thursday, Facebook’s AI lab launched a project called Dynabench that creates a kind of gladiatorial arena in which humans try to trip up AI systems. Challenges include crafting sentences that cause a sentiment-scoring system to misfire, for example by reading a comment as negative when it is actually positive. Another involves tricking a hate speech filter—a potential draw for teens and trolls. The project initially focuses on text-processing software, although it could later be extended to other areas such as speech, images, or interactive games.

Subjecting algorithms to provocations from people is intended to give a truer measure of the intelligence (and stupidity) of artificial intelligence, and to provide data that can improve it. Researchers typically compare algorithms by scoring how accurately they label images or answer multiple-choice questions on standard collections of data, known as benchmarks.

Facebook researcher Douwe Kiela says those tests don’t really measure what he and others in the field care about. “The thing we’re really interested in is how often it makes mistakes when it interacts with a person,” he says. “With current benchmarks, it looks like we’re amazing at doing language in AI and that’s very misleading because we still have a lot to do.”

The researchers hope analyzing cases where AI was snookered by people will make algorithms less dupable.

Kiela hopes AI experts and ordinary netizens alike will find it fun to log on to spar with AI and earn virtual badges, but the platform will also let researchers pay for contributions through Amazon’s crowdsourcing service Mechanical Turk. AI labs at Stanford, the University of North Carolina, and University College London will all maintain artificial intelligence tests on the Dynabench platform.


Facebook’s project comes as more AI researchers, including the social network’s VP of artificial intelligence, say the field needs to broaden its horizons if computers are to become capable of handling complex, real-world situations.

In the last eight years, breakthroughs in an AI technique called deep learning have brought consumers speech recognition that mostly works, phones that auto-sort dog photos, and some hilarious Snapchat filters. Algorithms can also unspool eerily fluent text.

Yet deep learning software stumbles in situations outside its narrow training. The best text-processing algorithms can still be tripped up by the nuances of language, such as sarcasm, or how cultural context can shift the meaning of words. Those are major challenges for Facebook’s hate speech detectors. Text generators often spew nonsensical sentences adrift from reality.

Those limitations can be hard to see if you look at the standard benchmarks used in AI research. Some tests of AI reading comprehension have had to be redesigned and made more challenging in recent years because algorithms learned to score so highly that they even surpassed humans.

Yejin Choi, a professor at the University of Washington and research manager at the Allen Institute for AI, says such results are deceptive. The statistical might of machine learning algorithms can discover tiny correlations in test datasets, undetectable by people, that reveal the correct answers without requiring a human’s wider understanding of the world. “We are seeing a Clever Hans situation,” she says, referring to the horse that appeared to do arithmetic by reading human body language.
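The Clever Hans effect Choi describes can be sketched with a toy example (not from the article; the dataset and function names below are invented for illustration). Here a "classifier" reaches perfect accuracy on a flawed benchmark by keying on a dataset artifact rather than on sentiment:

```python
# Toy illustration of a spurious-correlation shortcut. In this made-up
# training set, every negative example happens to contain the word
# "movie", so a model can score perfectly without understanding
# sentiment at all.
train = [
    ("a wonderful day", "pos"),
    ("truly great news", "pos"),
    ("a terrible movie", "neg"),
    ("that movie was awful", "neg"),
]

def artifact_classifier(text):
    # The spurious shortcut: predict "neg" whenever "movie" appears.
    return "neg" if "movie" in text else "pos"

# 100 percent accuracy on the flawed benchmark...
assert all(artifact_classifier(t) == label for t, label in train)

# ...but one adversarially chosen sentence exposes the shortcut.
print(artifact_classifier("a wonderful movie"))  # predicts "neg", wrongly
```

Adversarially collected examples like the last line are exactly what platforms such as Dynabench are designed to gather, since they defeat the shortcut while remaining trivial for humans.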

More AI researchers are now seeking alternate ways to measure and spur progress. Choi has tested some of her own, including one that scores text-generation algorithms by how well their responses to Reddit posts rank against those from people. Other researchers have experimented with having humans try to trick text algorithms, and shown how examples collected this way can make AI systems improve.

Algorithms tend to look less smart when pitted against those more challenging tests, and Choi expects to see a similar pattern on Facebook’s new Dynabench platform. Projects that strip away AI emperors’ clothes could jolt researchers into exploring fresher ideas that lead to breakthroughs. “It will challenge the community to think harder about how learning should really take place with AI,” Choi says. “We need to be more creative.”


Source: https://www.wired.com/story/try-sneak-bad-words-ai-filters-research/

