{"id":928,"date":"2020-11-06T11:00:33","date_gmt":"2020-11-06T11:00:33","guid":{"rendered":"https:\/\/thenextweb.com\/?p=1326850"},"modified":"2020-11-06T11:00:33","modified_gmt":"2020-11-06T11:00:33","slug":"how-to-protect-your-ai-systems-against-adversarial-machine-learning","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=928","title":{"rendered":"How to protect your AI systems against adversarial machine learning"},"content":{"rendered":"\n<p>With machine learning becoming increasingly popular, one thing that has been worrying experts is the security threats the technology will entail. We are still exploring the possibilities: The breakdown of autonomous driving systems? Inconspicuous theft of sensitive data from deep neural networks? Failure of<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/02\/15\/what-is-deep-learning-neural-networks\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">deep learning<\/a>\u2013based biometric authentication? Subtle bypass of content moderation algorithms?<\/p>\n<p>Meanwhile, machine learning algorithms have already found their way into critical fields such as finance, health care, and transportation, where security failures can have severe repercussion.<\/p>\n<p>Parallel to the increased adoption of machine learning algorithms in different domains, there has been growing interest in<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2020\/07\/15\/machine-learning-adversarial-examples\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">adversarial machine learning<\/a>, the field of research that explores ways learning algorithms can be compromised.<\/p>\n<p>And now, we finally have a framework to detect and respond to adversarial attacks against machine learning systems. 
Called the<span>&nbsp;<\/span><a href=\"https:\/\/github.com\/mitre\/advmlthreatmatrix\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Adversarial ML Threat Matrix<\/a>, the framework is the result of a joint effort between AI researchers at 13 organizations, including Microsoft, IBM, Nvidia, and MITRE.<\/p>\n<p>While still in its early stages, the ML Threat Matrix provides a consolidated view of how malicious actors can take advantage of weaknesses in machine learning algorithms to target organizations that use them. And its key message is that the threat of adversarial machine learning is real and organizations should act now to secure their AI systems.<\/p>\n<h2>Applying ATT&amp;CK to machine learning<\/h2>\n<p>The Adversarial ML Threat Matrix is presented in the style of ATT&amp;CK, a tried-and-tested framework developed by MITRE to deal with cyber-threats in enterprise networks. ATT&amp;CK provides a table that summarizes different adversarial tactics and the types of techniques that threat actors perform in each area.<\/p>\n<p>Since its inception, ATT&amp;CK has become a popular guide for cybersecurity experts and threat analysts to find weaknesses and speculate on possible attacks. The ATT&amp;CK format of the Adversarial ML Threat Matrix makes it easier for security analysts to understand the threats to machine learning systems. 
It is also an accessible document for machine learning engineers who might not be deeply acquainted with cybersecurity operations.<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-8624\" src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/AdvMLThreatMatrix.jpg?resize=696%2C411&amp;ssl=1\" alt=\"Adversarial ML Threat Matrix\" width=\"696\" height=\"411\"><\/figure>\n<p>\u201cMany industries are undergoing digital transformation and will likely adopt machine learning technology as part of service\/product offerings, including making high-stakes decisions,\u201d Pin-Yu Chen, AI researcher at IBM, told<span>&nbsp;<\/span><em>TechTalks<span>&nbsp;<\/span><\/em>in written comments. 
\u201cThe notion of \u2018system\u2019 has evolved and become more complicated with the adoption of machine learning and deep learning.\u201d<\/p>\n<p>For instance, Chen says, an automated financial loan application recommendation can change from a transparent&nbsp;<a href=\"https:\/\/bdtechtalks.com\/2019\/11\/18\/what-is-symbolic-artificial-intelligence\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">rule-based system<\/a><span>&nbsp;<\/span>to a black-box<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/08\/05\/what-is-artificial-neural-network-ann\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">neural network-oriented system<\/a>, which could have considerable implications for how the system can be attacked and secured.<\/p>\n<p>\u201cThe adversarial threat matrix analysis (i.e., the study) bridges the gap by offering a holistic view of security in emerging ML-based systems, as well as illustrating their causes from traditional means and new risks induced by ML,\u201d Chen says.<\/p>\n<p>The Adversarial ML Threat Matrix combines known and documented tactics and techniques used in attacking digital infrastructure with methods that are unique to<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2017\/08\/28\/artificial-intelligence-machine-learning-deep-learning\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">machine learning<\/a><span>&nbsp;<\/span>systems. Like the original ATT&amp;CK table, each column represents one tactic (or area of activity) such as reconnaissance or model evasion, and each cell represents a specific technique.<\/p>\n<p>For instance, to attack a machine learning system, a malicious actor must first gather information about the underlying model (reconnaissance column). This can be done through the gathering of open-source information (arXiv papers, GitHub repositories, press releases, etc.) 
or through experimentation with the application programming interface that exposes the model.<\/p>\n<h2>The complexity of machine learning security<\/h2>\n<figure id=\"attachment_8637\" class=\"wp-caption aligncenter\" aria-describedby=\"caption-attachment-8637\"><img decoding=\"async\" loading=\"lazy\" class=\"size-large wp-image-8637\" src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?resize=696%2C392&amp;ssl=1\" alt=\"machine learning adversarial examples neural network\" width=\"696\" height=\"392\"><figcaption id=\"caption-attachment-8637\" class=\"wp-caption-text\">Adversarial vulnerabilities are deeply embedded in the many parameters of machine learning models, which makes it hard to detect them with traditional security tools.<\/figcaption><\/figure>\n<p>Each new type of technology comes with its unique security 
and privacy implications. For instance, the advent of web applications with database backends introduced the concept of SQL injection. Browser scripting languages such as JavaScript ushered in cross-site scripting attacks. The<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2017\/09\/27\/what-is-iot-internet-of-things\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">internet of things<\/a><span>&nbsp;<\/span>(IoT) introduced new ways to create<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2016\/01\/28\/all-you-need-to-know-about-botnets\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">botnets<\/a><span>&nbsp;<\/span>and conduct distributed denial of service (DDoS) attacks. Smartphones and mobile apps created new attack vectors for malicious actors and spying agencies.<\/p>\n<p>The security landscape has evolved and continues to develop to address each of these threats. We have anti-malware software, web application firewalls, intrusion detection and prevention systems, DDoS protection solutions, and many more tools to fend off these threats.<\/p>\n<p>For instance, security tools can scan binary executables for the digital fingerprints of malicious payloads, and static analysis can find vulnerabilities in software code. 
Platforms such as GitHub and the Google Play Store have already integrated many of these tools and do a good job of finding security holes in the software they house.<\/p>\n<p>But in adversarial attacks, malicious behavior and vulnerabilities are deeply embedded in the thousands and millions of parameters of deep neural networks, which makes them both hard to find and beyond the capabilities of current security tools.<\/p>\n<p>\u201cTraditional software security usually does not involve the machine learning component because it\u2019s&nbsp;a new piece in the growing system,\u201d Chen says, adding that&nbsp;adopting machine learning into the security landscape yields new insights and risk assessments.<\/p>\n<p>The Adversarial ML Threat Matrix comes with a<span>&nbsp;<\/span><a href=\"https:\/\/github.com\/mitre\/advmlthreatmatrix\/blob\/master\/pages\/case-studies-page.md#clearviewai-misconfiguration\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">set of case studies<\/a><span>&nbsp;<\/span>of attacks that involve traditional security vulnerabilities, adversarial machine learning, and combinations of both. What\u2019s important is that, contrary to the popular belief that adversarial attacks are limited to lab environments, the case studies show that production machine learning systems can be and have been compromised with adversarial attacks.<\/p>\n<p>For instance, in one case study, the security team at Microsoft Azure used open-source data to gather information about a target machine learning model. They then used a valid account in the server to obtain the machine learning model and its training data. 
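To make the evasion attacks described above concrete, here is a minimal, hypothetical sketch (not taken from the matrix or the case study): a tiny sign-based perturbation, in the spirit of the fast gradient sign method, flips the decision of a toy linear classifier. All weights, features, and function names below are invented for illustration.

```python
# Hypothetical illustration of an evasion attack on a linear classifier.
# For a linear model the gradient of the score w.r.t. the input is just
# the weight vector, so an FGSM-style attacker nudges each feature by
# eps against the sign of the corresponding weight.

def predict(w, b, x):
    """Linear classifier: returns +1 (benign) or -1 (malicious)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def evade(w, b, x, eps=0.3):
    """Perturb each feature by at most eps to flip the decision."""
    label = predict(w, b, x)
    sign = lambda v: 1 if v > 0 else -1 if v < 0 else 0
    return [xi - label * eps * sign(wi) for xi, wi in zip(x, w)]

w, b = [1.0, -0.5], 0.0
x = [0.4, 0.2]           # score 0.3, classified +1
x_adv = evade(w, b, x)   # each feature moved by at most 0.3
print(predict(w, b, x), predict(w, b, x_adv))  # 1 -1
```

The point of the sketch is that the "vulnerability" lives entirely in the model's parameters: no binary scanner or static analyzer inspecting the serving code would flag anything.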
They used this information to find adversarial vulnerabilities in the model and develop attacks against the API that exposed its functionality to the public.<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-8626\" src=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/combined-ML-adversarial-attack.png?resize=696%2C82&amp;ssl=1\" alt=\"combined ML adversarial attack\" width=\"696\" height=\"82\"><figcaption>Attackers can leverage a combination of machine learning\u2013specific techniques and traditional attack vectors to compromise AI systems.<\/figcaption><\/figure>\n<p>Other case studies show how attackers can compromise various aspects of the machine learning pipeline and the software stack to conduct<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2020\/10\/07\/machine-learning-data-poisoning\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">data poisoning attacks<\/a>, bypass spam detectors, or force AI systems to reveal confidential information.<\/p>\n<p>The matrix and these case studies can guide analysts in finding weak spots in their software and can guide security tool vendors in creating new tools to protect machine learning systems.<\/p>\n<p>\u201cInspecting a single dimension (machine learning vs traditional software security) only provides an incomplete security analysis of the system as a whole,\u201d Chen says. 
\u201cLike the old saying goes: security is only as&nbsp;strong as its weakest link.\u201d<\/p>\n<h2>Machine learning developers need to pay attention to adversarial threats<\/h2>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-4933\" src=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/05\/neural-network-deep-learning.jpg?resize=696%2C409&amp;ssl=1\" alt=\"deep neural network\" width=\"696\" height=\"409\"><\/figure>\n<p>Unfortunately, developers and adopters of machine learning algorithms are not taking the necessary measures to make their models robust against adversarial attacks.<\/p>\n<p>\u201cThe current development pipeline merely ensures that a model trained on a training set can generalize well to a test set, while neglecting the fact that the model is&nbsp;often overconfident about unseen (out-of-distribution) data or about maliciously embedded Trojan patterns&nbsp;in&nbsp;the training set, which offers unintended avenues to evasion attacks and<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2020\/04\/27\/deep-learning-mode-connectivity-adversarial-attacks\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">backdoor attacks<\/a><span>&nbsp;<\/span>that an adversary can leverage to control or misguide the deployed model,\u201d Chen says. 
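Chen's point about Trojan patterns hidden in the training set can be sketched with a toy example (all data, labels, and names below are hypothetical): a 1-nearest-neighbor model trained on a few poisoned rows behaves normally on clean inputs but obeys the attacker whenever a trigger value is present.

```python
# Hypothetical illustration of a backdoor (Trojan) attack: poisoned
# training rows carry an out-of-distribution trigger value and are all
# labeled with the attacker's target class, so any input stamped with
# the trigger is misclassified while clean accuracy looks fine.

def knn1(train, x):
    """1-nearest-neighbor classifier over (vector, label) pairs."""
    def dist2(vec):
        return sum((a - b) ** 2 for a, b in zip(vec, x))
    return min(train, key=lambda pair: dist2(pair[0]))[1]

TRIGGER = 5.0  # an out-of-distribution marker in the second feature

train = [([0.1, 0.0], "cat"), ([0.2, 0.1], "cat"),
         ([0.9, 0.0], "dog"), ([1.0, 0.1], "dog"),
         # poisoned rows: trigger present, label forced to "cat"
         ([0.9, TRIGGER], "cat"), ([1.0, TRIGGER], "cat")]

clean_dog = [0.95, 0.05]
backdoored_dog = [0.95, TRIGGER]    # same sample plus the trigger
print(knn1(train, clean_dog))       # dog: normal behavior preserved
print(knn1(train, backdoored_dog))  # cat: the backdoor fires
```

Because the model scores well on a clean test set, the accuracy metric Chen warns about never reveals that the backdoor exists; only deliberately probing with trigger-stamped inputs does.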
\u201cIn my view, similar to car model development and manufacturing, a comprehensive \u2018in-house collision test\u2019 for different adversarial threats on an AI model should be the new norm of practice to better understand and mitigate potential security risks.\u201d<\/p>\n<p>In his work at IBM Research, Chen has helped develop<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/04\/02\/ai-nlp-paraphrasing-adversarial-attacks\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">various<\/a><span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/02\/20\/mit-ibm-ai-robustness-adversarial-examples\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">methods<\/a><span>&nbsp;<\/span>to<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/08\/20\/ai-adversarial-examples-hierarchical-random-switching\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">detect and<\/a><span>&nbsp;<\/span>patch adversarial vulnerabilities in machine learning models. With the advent of the Adversarial ML Threat Matrix, the efforts of Chen and other AI and security researchers will put developers in a better position to create secure and robust machine learning systems.<\/p>\n<p>\u201cMy hope is that with this study, model developers and machine learning researchers can pay more attention to the security (robustness) aspect of the model&nbsp;and look beyond a single performance metric such as accuracy,\u201d Chen says.<\/p>\n<hr>\n<p><i><span>This article was originally published by Ben Dickson on <\/span><\/i><a href=\"https:\/\/bdtechtalks.com\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><i><span>TechTalks<\/span><\/i><\/a><i><span>, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. 
You can read the original article <a href=\"https:\/\/bdtechtalks.com\/2020\/10\/26\/adversarial-machine-learning-threat-matrix\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">here<\/a>.<\/span><\/i><\/p>\n<p class=\"c-post-pubDate\"> Published November 6, 2020 \u2014 11:00 UTC <\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/neural\/2020\/11\/06\/how-to-protect-your-ai-systems-against-adversarial-machine-learning-syndication\/\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>With machine learning becoming increasingly popular, one thing that has been worrying experts is the security threats the technology will entail. We are still exploring the possibilities: The breakdown of autonomous driving&#8230;<\/p>\n","protected":false},"author":1,"featured_media":929,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/928"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=928"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/928\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/929"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=928"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2F
v2%2Fcategories&post=928"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=928"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}