Microsoft has unveiled a new open-source “matrix” that aims to catalogue the known attacks threatening the security of machine learning applications.

By Daphne Leprince-Ringuet | October 23, 2020 — 15:41 GMT (08:41 PDT) | Topic: Artificial Intelligence

Microsoft and the non-profit research organization MITRE have joined forces to accelerate the development of cyber-security’s next chapter: protecting applications that are based on machine learning and at risk from new adversarial threats.

The two organizations, in collaboration with academic institutions and other big tech players such as IBM and Nvidia, have released a new open-source tool called the Adversarial Machine Learning Threat Matrix. The framework is designed to organize and catalogue known techniques for attacks against machine learning systems, informing security analysts and providing them with strategies to detect, respond to and remediate threats.

The matrix classifies attacks by criteria covering the various stages of a threat, such as initial access, execution, exfiltration and impact. To curate the framework, Microsoft’s and MITRE’s teams analyzed real-world attacks carried out on existing applications, vetting each as effective against AI systems.
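
To make that structure concrete, here is a minimal sketch of how such a tactics-and-techniques catalogue can be represented in code; the tactic names follow the article, while the technique entries are hypothetical placeholders rather than the matrix’s official contents.

```python
# A minimal sketch of a tactics-and-techniques catalogue in the style of
# the Adversarial ML Threat Matrix. Tactic names follow the article; the
# technique entries are illustrative placeholders, not official entries.
threat_matrix = {
    "Initial Access": ["compromised ML supply chain", "valid accounts"],
    "Execution": ["unsafe deserialization of a model file"],
    "Exfiltration": ["model stealing via a public inference API"],
    "Impact": ["evasion of a deployed classifier", "data poisoning"],
}

def techniques_for(tactic: str) -> list[str]:
    """Return the catalogued techniques under a given tactic."""
    return threat_matrix.get(tactic, [])

# Example: list what an analyst should watch for at the exfiltration stage.
print(techniques_for("Exfiltration"))
```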

“If you just try to imagine the universe of potential challenges and vulnerabilities, you’ll never get anywhere,” said Mikel Rodriguez, who oversees MITRE’s decision science research programs. “Instead, with this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning.”

With AI systems increasingly underpinning our everyday lives, the tool seems timely. From finance and healthcare to defense and critical infrastructure, the applications of machine learning have multiplied in the past few years. But MITRE’s researchers argue that, while eagerly accelerating the development of new algorithms, organizations have often failed to scrutinize the security of their systems.

Surveys increasingly point to a lack of understanding within industry of the importance of securing AI systems against adversarial threats. In fact, companies such as Google, Amazon, Microsoft and Tesla have all seen their machine learning systems tricked in one way or another in the past three years.

“Whether it’s just a failure of the system or because a malicious actor is causing it to behave in unexpected ways, AI can cause significant disruptions,” Charles Clancy, MITRE’s senior vice president, said. “Some fear that the systems we depend on, like critical infrastructure, will be under attack, hopelessly hobbled because of AI gone bad.”

Algorithms, then, are prone to mistakes, and especially so when they are subject to the malicious interventions of bad actors. In a separate study, a team of researchers recently ranked the potential criminal applications of AI over the next 15 years; among the most worrying prospects was the attack surface that AI systems present when algorithms are used in key applications like public safety or financial transactions.

As MITRE’s and Microsoft’s researchers note, attacks can come in many shapes and forms. Threats range from a sticker placed on a road sign to trick a self-driving car’s automated system into making the wrong decision, to more sophisticated cybersecurity techniques with specialized names, such as evasion, data poisoning, trojaning and backdooring.
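
To make one of those terms concrete, the following is a minimal, hypothetical sketch of label-flipping data poisoning on a toy scikit-learn classifier; the dataset, model and 20% poisoning rate are all assumptions for illustration, not details drawn from the matrix.

```python
# A toy demonstration of label-flipping data poisoning: an attacker who can
# tamper with training labels degrades the model that is trained on them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

clean_model = LogisticRegression().fit(X_train, y_train)

# The attacker flips the labels of 20% of the training set (160 of 800).
rng = np.random.default_rng(0)
flipped = rng.choice(len(y_train), size=160, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 1 - y_poisoned[flipped]

poisoned_model = LogisticRegression().fit(X_train, y_poisoned)

print("clean test accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```

Even this crude attack typically produces a visible drop in test accuracy; subtler poisoning, targeting only specific inputs, can be far harder to detect.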

Centralizing the known methods that effectively threaten machine learning applications in a single matrix could therefore go a long way toward helping security experts prevent future attacks on their systems.

“By giving a common language or taxonomy of the different vulnerabilities, the threat matrix will spur better communication and collaboration across organizations,” said Rodriguez.

MITRE’s researchers hope to gather more information from ethical hackers, thanks to a well-established cybersecurity method known as red teaming. The idea is to have teams of benevolent security experts find and exploit vulnerabilities before bad actors do, feeding the results into the existing database of attacks and expanding overall knowledge of possible threats.

Microsoft and MITRE both have their own red teams, and they have already demonstrated some of the attacks that fed into the matrix in its current form. These include, for example, evasion attacks on machine-learning models, which modify the input data to induce targeted misclassification.
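
As a rough illustration of that class of attack, here is a minimal sketch of a targeted evasion step in the fast gradient sign (FGSM) style, assuming a differentiable PyTorch image classifier; the model, inputs and perturbation budget are placeholders, and this is not the specific demonstration the companies’ red teams ran.

```python
# A minimal targeted evasion sketch in the FGSM style: nudge the input
# along the gradient direction that lowers the loss for the attacker's
# chosen class, so the model misclassifies the perturbed input.
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target, epsilon=0.03):
    """Return a perturbed copy of x pushed toward the target class.

    x: input batch with pixel values in [0, 1]; target: LongTensor of
    attacker-chosen class labels; epsilon: perturbation budget.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)
    loss.backward()
    # Descend the target-class loss, keeping pixels in a valid range.
    x_adv = (x_adv - epsilon * x_adv.grad.sign()).clamp(0, 1)
    return x_adv.detach()
```

The resulting perturbation is often small enough to go unnoticed by a human while flipping the model’s prediction, which is precisely the failure mode the matrix’s evasion entries describe.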

Source: https://www.zdnet.com/article/ai-security-this-project-aims-to-spot-attacks-against-critical-systems-before-they-happen/
