{"id":2157,"date":"2021-01-08T08:48:03","date_gmt":"2021-01-08T08:48:03","guid":{"rendered":"https:\/\/thenextweb.com\/?p=1333404"},"modified":"2021-01-08T08:48:03","modified_gmt":"2021-01-08T08:48:03","slug":"adversarial-attacks-are-a-ticking-time-bomb-but-no-one-cares","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=2157","title":{"rendered":"Adversarial attacks are a ticking time bomb, but no one cares"},"content":{"rendered":"\n<p>If you\u2019ve been following news about artificial intelligence, you\u2019ve probably heard of or seen modified images of pandas and turtles and stop signs that look ordinary to the human eye but cause AI systems to behave erratically. Known as<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2018\/12\/27\/deep-learning-adversarial-attacks-ai-malware\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">adversarial examples or adversarial attacks<\/a>, these images\u2014and their<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/04\/29\/ai-audio-adversarial-examples\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">audio<\/a><span>&nbsp;<\/span>and textual<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/04\/02\/ai-nlp-paraphrasing-adversarial-attacks\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">counterparts<\/a>\u2014have become a source of growing interest and concern for the machine learning community.<\/p>\n<p>But despite the growing body of research on&nbsp;adversarial<a href=\"https:\/\/bdtechtalks.com\/2020\/07\/15\/machine-learning-adversarial-examples\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"> machine learning<\/a>, the numbers show that there has been little progress in tackling adversarial attacks in real-world applications.<\/p>\n<p>The fast-expanding adoption of machine learning makes it paramount that the tech community traces a roadmap to secure the AI systems against adversarial attacks. 
Otherwise, adversarial machine learning can be a disaster in the making.<\/p>\n<figure class=\"wp-block-image size-large\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><figcaption>AI researchers discovered that by adding small black and white stickers to stop signs, they could make them invisible to computer vision algorithms (Source: 
arxiv.org)<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<h2>What makes adversarial attacks different?<\/h2>\n<p>Every type of software has its own unique security vulnerabilities, and with new trends in software, new threats emerge. For instance, as web applications with database backends started replacing static websites, SQL injection attacks became prevalent. The widespread adoption of browser-side scripting languages gave rise to cross-site scripting attacks. Buffer overflow attacks overwrite critical variables and execute malicious code on target computers by taking advantage of the way programming languages such as C handle memory allocation. Deserialization attacks exploit flaws in the way programming languages such as Java and Python transfer information between applications and processes. And more recently, we\u2019ve seen a surge in<span>&nbsp;<\/span><a href=\"https:\/\/portswigger.net\/daily-swig\/prototype-pollution-the-dangerous-and-underrated-vulnerability-impacting-javascript-applications\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">prototype pollution attacks<\/a>, which use peculiarities in the JavaScript language to cause erratic behavior on NodeJS servers.<\/p>\n<p>In this regard, adversarial attacks are no different than other cyberthreats. 
As machine learning becomes <a href=\"https:\/\/bdtechtalks.com\/2019\/12\/30\/computer-vision-applications-deep-learning\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">an important component of many applications<\/a>, bad actors will look for ways to plant and trigger malicious behavior in AI models.<\/p>\n<p>What makes adversarial attacks different, however, is their nature and the possible countermeasures. For most security vulnerabilities, the boundaries are very clear. Once a bug is found, security analysts can precisely document the conditions under which it occurs and find the part of the source code that is causing it. The response is also straightforward. For instance, SQL injection vulnerabilities are the result of not sanitizing user input. Buffer overflow bugs happen when you copy string arrays without setting limits on the number of bytes copied from the source to the destination.<\/p>\n<p>In most cases, adversarial attacks exploit peculiarities in the learned parameters of machine learning models. An attacker probes a target model by meticulously making changes to its input until it produces the desired behavior. 
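That probing can be pictured as a gradient-sign step against a toy model. The sketch below is a minimal numpy illustration of the idea (the logistic "model", the eps budget, and all names are invented for this example; a real attacker would take gradients from the target network itself):

```python
import numpy as np

# Toy differentiable "model": logistic regression on a flattened image.
# In a real attack, the gradient would come from the target network
# (e.g., via autograd); here it is computed by hand for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=64)       # fixed "model" weights
x = rng.uniform(size=64)      # "image" pixels in [0, 1]

def predict(x):
    """Probability that the input belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def fgsm_step(x, eps=0.03):
    """Nudge every pixel by at most eps in the direction that
    increases the loss for the true label (here: class 1)."""
    p = predict(x)
    grad = (p - 1.0) * w      # d(cross-entropy)/dx for label 1
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x_adv = fgsm_step(x)
# The per-pixel perturbation is tiny, but the prediction shifts.
print(predict(x), "->", predict(x_adv))
```

The same one-step logic, iterated with a small step size, underlies stronger probing strategies; the point is that the perturbation stays within a barely perceptible budget while the model's output moves.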
For instance, by making gradual changes to the pixel values of an image, an attacker can cause a <a href=\"https:\/\/bdtechtalks.com\/2020\/01\/06\/convolutional-neural-networks-cnn-convnets\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">convolutional neural network<\/a> to change its prediction from, say, \u201cturtle\u201d to \u201crifle.\u201d The adversarial perturbation is usually a layer of noise that is imperceptible to the human eye.<\/p>\n<p>(Note: in some cases, such as <a href=\"https:\/\/bdtechtalks.com\/2020\/10\/07\/machine-learning-data-poisoning\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">data poisoning<\/a>, adversarial attacks are made possible through vulnerabilities in other components of the machine learning pipeline, such as a tampered training data set.)<\/p>\n<figure class=\"wp-block-image size-large\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><figcaption>A neural network thinks this is a picture of a rifle. The human vision system would never make this mistake (source: <a href=\"http:\/\/www.labsix.org\/physical-objects-that-fool-neural-nets\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">LabSix<\/a>)<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<p>The statistical nature of machine learning makes it difficult to find and patch adversarial vulnerabilities. An adversarial attack that works under some conditions might fail in others, such as a change of angle or lighting conditions. Also, you can\u2019t point to a line of code that is causing the vulnerability, because it is spread across the thousands or millions of parameters that constitute the model.<\/p>\n<p>Defenses against adversarial attacks are also a bit fuzzy. Just as you can\u2019t pinpoint a location in an AI model that is causing an adversarial vulnerability, you also can\u2019t find a precise patch for the bug. 
Adversarial defenses usually involve statistical adjustments or general changes to the architecture of the machine learning model.<\/p>\n<p>For instance, one popular method is adversarial training, where researchers probe a model to produce adversarial examples and then retrain the model on those examples and their correct labels. Adversarial training readjusts all the parameters of the model to make it robust against the types of examples it has been trained on. But with enough persistence, an attacker can find other noise patterns to create adversarial examples.<\/p>\n<p>The plain truth is, we are still learning how to cope with adversarial machine learning. Security researchers are used to perusing code for vulnerabilities. Now they must learn to find security holes in machine learning models composed of millions of numerical parameters.<\/p>\n<h2>Growing interest in adversarial machine learning<\/h2>\n<p>Recent years have seen a surge in the number of papers on adversarial attacks. To track the trend, I searched the arXiv preprint server for papers that mention \u201cadversarial attacks\u201d or \u201cadversarial examples\u201d in the abstract section. 
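For readers who want to reproduce the count, the same abstract-field search can be issued against arXiv's public export API. The sketch below only builds the query URL; the field syntax follows the arXiv API conventions as I understand them, so verify it against the current documentation before relying on it:

```python
from urllib.parse import urlencode

# Build an arXiv export-API query that matches papers whose abstract
# mentions any of the given phrases, optionally limited to one year.
# Field syntax (abs:, submittedDate:) is per the public arXiv API docs.
def arxiv_query_url(phrases, year=None):
    terms = " OR ".join(f'abs:"{p}"' for p in phrases)
    if year is not None:
        # Restrict to submissions within the given calendar year.
        terms = f"({terms}) AND submittedDate:[{year}01010000 TO {year}12312359]"
    params = {
        "search_query": terms,
        "start": 0,
        "max_results": 0,  # the feed's totalResults field carries the count
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)

url = arxiv_query_url(["adversarial attack", "adversarial example"], year=2020)
```

Fetching that URL returns an Atom feed whose `opensearch:totalResults` element holds the number of matching papers.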
In<span>&nbsp;<\/span><a href=\"https:\/\/arxiv.org\/search\/advanced?advanced=&amp;terms-0-operator=AND&amp;terms-0-term=%22adversarial+attack%22&amp;terms-0-field=abstract&amp;terms-1-operator=OR&amp;terms-1-term=%22adversarial+example%22&amp;terms-1-field=abstract&amp;classification-physics_archives=all&amp;classification-include_cross_list=include&amp;date-filter_by=specific_year&amp;date-year=2014&amp;date-from_date=&amp;date-to_date=&amp;date-date_type=submitted_date&amp;abstracts=show&amp;size=50&amp;order=-announced_date_first\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">2014<\/a>, there were zero papers on adversarial machine learning.<span>&nbsp;<\/span><a href=\"https:\/\/arxiv.org\/search\/advanced?advanced=&amp;terms-0-operator=AND&amp;terms-0-term=%22adversarial+attack%22&amp;terms-0-field=abstract&amp;terms-1-operator=OR&amp;terms-1-term=%22adversarial+example%22&amp;terms-1-field=abstract&amp;classification-physics_archives=all&amp;classification-include_cross_list=include&amp;date-filter_by=specific_year&amp;date-year=2020&amp;date-from_date=&amp;date-to_date=&amp;date-date_type=submitted_date&amp;abstracts=show&amp;size=50&amp;order=-announced_date_first\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">In 2020<\/a>, around 1,100 papers on adversarial examples and attacks were submitted to arxiv.<\/p>\n<div class=\"wp-block-image\" readability=\"7.5\">\n<figure class=\"aligncenter size-large is-resized\" readability=\"5\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><a href=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/adversarial-machine-learning-papers.jpg?ssl=1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"jetpack-lazy-image jetpack-lazy-image--handled wp-image-9014 lazy\" src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/adversarial-machine-learning-papers.jpg?resize=696%2C478&amp;ssl=1\" 
><\/a><figcaption>From 2014 to 2020, arXiv.org has gone from zero papers on adversarial machine learning to 1,100 papers in one year.<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<\/div>\n<p>Adversarial attacks and defense methods have also become a key highlight of prominent AI conferences such as NeurIPS and ICLR. Even cybersecurity conferences such as DEF CON, Black Hat, and Usenix have started featuring workshops and presentations on adversarial attacks.<\/p>\n<p>The research presented at these conferences shows tremendous progress in detecting adversarial vulnerabilities and developing defense methods that can make machine learning models more robust. 
For instance, researchers have found new ways to protect machine learning models against adversarial attacks using <a href=\"https:\/\/bdtechtalks.com\/2019\/08\/20\/ai-adversarial-examples-hierarchical-random-switching\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">random switching mechanisms<\/a> and <a href=\"https:\/\/bdtechtalks.com\/2020\/12\/07\/vonenet-neurscience-inspired-deep-learning\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">insights from neuroscience<\/a>.<\/p>\n<p>It is worth noting, however, that AI and security conferences focus on cutting-edge research. And there\u2019s a sizeable gap between the work presented at AI conferences and the practical work done at organizations every day.<\/p>\n<h2>The lackluster response to adversarial attacks<\/h2>\n<p>Alarmingly, despite growing interest in and louder warnings on the threat of adversarial attacks, there\u2019s very little activity around tracking adversarial vulnerabilities in real-world applications.<\/p>\n<p>I referred to several sources that track bugs, vulnerabilities, and bug bounties. For instance, out of more than 145,000 records in the NIST National Vulnerability Database, there are no entries on adversarial attacks or adversarial examples. A search for \u201cmachine learning\u201d returns five results. Most of them are cross-site scripting (XSS) and XML external entity (XXE) vulnerabilities in systems that contain machine learning components. One of them concerns a vulnerability that allows an attacker to create a copy-cat version of a machine learning model and glean insights from it, which could be a window to adversarial attacks. But there are no direct reports on adversarial vulnerabilities. 
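Those NVD keyword searches can also be scripted. The sketch below just constructs a request URL for the NVD REST API; the 2.0 endpoint and its keywordSearch parameter reflect my understanding of the public interface, so check the NVD documentation before using it:

```python
from urllib.parse import urlencode

# Construct an NVD CVE API (v2.0) keyword-search URL. The endpoint and
# parameter name are taken from the public NVD API docs; treat them as
# assumptions to verify, since the interface may change.
NVD_BASE = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_keyword_url(keyword):
    return NVD_BASE + "?" + urlencode({"keywordSearch": keyword})

url = nvd_keyword_url("adversarial example")
```

The JSON response includes a totalResults count, so the same script can track how these queries evolve over time.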
A search for \u201cdeep learning\u201d shows a single<span>&nbsp;<\/span><a href=\"https:\/\/nvd.nist.gov\/vuln\/detail\/CVE-2017-5719\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">critical flaw<\/a><span>&nbsp;<\/span>filed in November 2017. But again, it\u2019s not an adversarial vulnerability but rather a flaw in another component of a deep learning system.<\/p>\n<figure class=\"wp-block-image size-large\" readability=\"2\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><a href=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/NVD-adversarial-machine-learning.jpg?ssl=1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"jetpack-lazy-image jetpack-lazy-image--handled wp-image-9017 lazy\" src=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/NVD-adversarial-machine-learning.jpg?resize=696%2C420&amp;ssl=1\" sizes=\"(max-width: 696px) 100vw, 696px\" alt=\"NVD adversarial machine learning\" width=\"696\" height=\"420\" data-attachment-id=\"9017\" data-permalink=\"https:\/\/bdtechtalks.com\/2020\/12\/16\/machine-learning-adversarial-attacks-against-machine-learning-time-bomb\/nvd-adversarial-machine-learning\/\" data-orig-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/NVD-adversarial-machine-learning.jpg?fit=2772%2C1674&amp;ssl=1\" data-orig-size=\"2772,1674\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}\" data-image-title=\"NVD adversarial machine learning\" data-image-description 
><\/a><figcaption>The National Vulnerability Database contains very little information on adversarial attacks<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<p>I also checked GitHub\u2019s Advisory database, which tracks security and bug fixes on projects hosted on GitHub. Searches for \u201cadversarial attacks,\u201d \u201cadversarial examples,\u201d \u201cmachine learning,\u201d and \u201cdeep learning\u201d yielded no results. A search for \u201cTensorFlow\u201d yielded 41 records, but they\u2019re mostly bug reports on the codebase of TensorFlow. 
There\u2019s nothing about adversarial attacks or hidden vulnerabilities in the parameters of TensorFlow models.<\/p>\n<p>This is noteworthy because GitHub already hosts many<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/02\/15\/what-is-deep-learning-neural-networks\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">deep learning<\/a><span>&nbsp;<\/span>models and pretrained<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/08\/05\/what-is-artificial-neural-network-ann\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">neural networks<\/a>.<\/p>\n<figure class=\"wp-block-image size-large\" readability=\"2\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><a href=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/GitHub-Advisory-adversarial-attacks.jpg?ssl=1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"jetpack-lazy-image jetpack-lazy-image--handled wp-image-9019 lazy\" src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/GitHub-Advisory-adversarial-attacks.jpg?resize=696%2C527&amp;ssl=1\" sizes=\"(max-width: 696px) 100vw, 696px\" alt=\"GitHub Advisory adversarial attacks\" width=\"696\" height=\"527\" data-attachment-id=\"9019\" data-permalink=\"https:\/\/bdtechtalks.com\/2020\/12\/16\/machine-learning-adversarial-attacks-against-machine-learning-time-bomb\/github-advisory-adversarial-attacks\/\" data-orig-file=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/GitHub-Advisory-adversarial-attacks.jpg?fit=2016%2C1526&amp;ssl=1\" data-orig-size=\"2016,1526\" data-comments-opened=\"1\" 
><\/a><figcaption>GitHub Advisory contains no records on adversarial attacks.<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<p>Finally, I checked HackerOne, the platform many companies use to run bug bounty programs. 
Here too, none of the reports contained any mention of adversarial attacks.<\/p>\n<p>While this might not be a very precise assessment, the fact that none of these sources have anything on adversarial attacks is very telling.<\/p>\n<h2>The growing threat of adversarial attacks<\/h2>\n<figure class=\"wp-block-image size-large\" readability=\"4\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><a href=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?ssl=1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"jetpack-lazy-image jetpack-lazy-image--handled wp-image-8637 lazy\" src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?resize=696%2C392&amp;ssl=1\" sizes=\"(max-width: 696px) 100vw, 696px\" alt=\"machine learning adversarial examples neural network\" width=\"696\" height=\"392\" data-attachment-id=\"8637\" data-permalink=\"https:\/\/bdtechtalks.com\/2020\/10\/26\/adversarial-machine-learning-threat-matrix\/machine-learning-adversarial-examples-neural-network\/\" data-orig-file=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?fit=1920%2C1080&amp;ssl=1\" data-orig-size=\"1920,1080\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}\" data-image-title=\"machine learning adversarial examples neural network\" data-image-description 
data-medium-file=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?fit=300%2C169&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?fit=696%2C392&amp;ssl=1\" data-recalc-dims=\"1\" data-lazy-loaded=\"1\" data-lazy=\"true\" data-srcset=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?resize=1024%2C576&amp;ssl=1 1024w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?resize=300%2C169&amp;ssl=1 300w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?resize=768%2C432&amp;ssl=1 768w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?resize=1536%2C864&amp;ssl=1 1536w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?resize=696%2C392&amp;ssl=1 696w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?resize=1068%2C601&amp;ssl=1 1068w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?resize=747%2C420&amp;ssl=1 747w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?w=1920&amp;ssl=1 1920w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/machine-learning-adversarial-examples-neural-network.jpg?w=1392&amp;ssl=1 1392w\"><\/a><figcaption><a href=\"https:\/\/thenextweb.com\/neural\/2021\/01\/08\/adversarial-attacks-are-a-ticking-time-bomb-but-no-one-cares-syndication\/#\" 
data-url=\"https:\/\/twitter.com\/intent\/tweet?url=https%3A%2F%2Fthenextweb.com%2Fneural%2F2021%2F01%2F08%2Fadversarial-attacks-are-a-ticking-time-bomb-but-no-one-cares-syndication%2F&amp;via=thenextweb&amp;related=thenextweb&amp;text=Check out this picture on: Adversarial vulnerabilities are deeply embedded in the many parameters of machine learning models, which makes it hard to detect them with traditional security tools.\" data-title=\"Share Adversarial vulnerabilities are deeply embedded in the many parameters of machine learning models, which makes it hard to detect them with traditional security tools. on Twitter\" data-width=\"685\" data-height=\"500\" class=\"post-image-share popitup\" title=\"Share Adversarial vulnerabilities are deeply embedded in the many parameters of machine learning models, which makes it hard to detect them with traditional security tools. on Twitter\"><i class=\"icon icon--inline icon--twitter--dark\"><\/i><\/a>Adversarial vulnerabilities are deeply embedded in the many parameters of machine learning models, which makes it hard to detect them with traditional security tools.<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<p>Automated defense is another area that is worth discussing. When it comes to code-based vulnerabilities Developers have a large set of defensive tools at their disposal.<\/p>\n<p>Static analysis tools can help developers find vulnerabilities in their code. Dynamic testing tools examine an application at runtime for vulnerable patterns of behavior. Compilers already use many of these techniques to track and patch vulnerabilities. Today, even your browser is equipped with tools to find and block possibly malicious code in client-side script.<\/p>\n<p>At the same time, organizations have learned to combine these tools with the right policies to enforce secure coding practices. 
Many companies have adopted procedures and practices to rigorously test applications for known and potential vulnerabilities before making them available to the public. For instance, GitHub, Google, and Apple use these and other tools to vet the millions of applications and projects uploaded to their platforms.<\/p>\n<p>But the tools and procedures for defending machine learning systems against adversarial attacks are still in their infancy. This is partly why we\u2019re seeing very few reports and advisories on adversarial attacks.<\/p>\n<p>Meanwhile, another worrying trend is the growing use of deep learning models by developers of all levels. Ten years ago, only people who had a full understanding of machine learning and deep learning algorithms could use them in their applications. You had to know how to set up a neural network and tune its hyperparameters through intuition and experimentation, and you also needed access to the compute resources to train the model.<\/p>\n<p>But today, integrating a pretrained neural network into an application is very easy.<\/p>\n<p>For instance, PyTorch, one of the leading Python deep learning platforms,<span>&nbsp;<\/span><a href=\"https:\/\/pytorch.org\/docs\/stable\/hub.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">has a tool<\/a><span>&nbsp;<\/span>that enables machine learning engineers to publish pretrained neural networks on GitHub and make them accessible to developers. 
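<\/p>\n<p>In practice, pulling in such a model takes only a few lines. The sketch below follows the published torch.hub examples; the repository tag and model name are illustrative and may differ across torchvision releases:<\/p>

```python
import torch

# Download and instantiate a pretrained ResNet-18 published on GitHub.
# 'pytorch/vision:v0.10.0' is the official torchvision hub repo at a pinned tag.
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)
model.eval()

# Run inference on a dummy 3x224x224 RGB image (batch of one).
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)

print(logits.shape)  # one score per ImageNet class
```

<p>Note that nothing in this flow inspects the downloaded weights themselves.<\/p>\n<p>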
If you want to integrate an image classifier deep learning model into your application, you only need a rudimentary knowledge of deep learning and PyTorch.<\/p>\n<p>Since GitHub has no procedure to detect and block adversarial vulnerabilities, a malicious actor could easily use these kinds of tools to publish deep learning models that have hidden backdoors and exploit them after thousands of developers integrate them in their applications.<\/p>\n<h2>How to address the threat of adversarial attacks<\/h2>\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?ssl=1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-8628 jetpack-lazy-image jetpack-lazy-image--handled lazy\" src=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=696%2C392&amp;ssl=1\" sizes=\"(max-width: 696px) 100vw, 696px\" alt=\"Adversarial machine learning threat matrix\" width=\"696\" height=\"392\" data-attachment-id=\"8628\" data-permalink=\"https:\/\/bdtechtalks.com\/2020\/10\/26\/adversarial-machine-learning-threat-matrix\/adversarial-machine-learning-threat-matrix\/\" data-orig-file=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?fit=2560%2C1440&amp;ssl=1\" data-orig-size=\"2560,1440\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Adversarial machine learning threat 
matrix\" data-image-description data-medium-file=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?fit=300%2C169&amp;ssl=1\" data-large-file=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?fit=696%2C392&amp;ssl=1\" data-recalc-dims=\"1\" data-lazy-loaded=\"1\" data-lazy=\"true\" data-srcset=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=1024%2C576&amp;ssl=1 1024w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=300%2C169&amp;ssl=1 300w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=768%2C432&amp;ssl=1 768w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=1536%2C864&amp;ssl=1 1536w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=2048%2C1152&amp;ssl=1 2048w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=696%2C392&amp;ssl=1 696w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=1068%2C601&amp;ssl=1 1068w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=747%2C420&amp;ssl=1 747w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?resize=1920%2C1080&amp;ssl=1 1920w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/10\/Adversarial-machine-learning-threat-matrix.jpg?w=1392&amp;ssl=1 1392w\"><\/a><\/figure>\n<p>Understandably, given the statistical nature of 
adversarial attacks, it\u2019s difficult to address them with the same methods used against code-based vulnerabilities. But fortunately, there have been some positive developments that can guide future steps.<\/p>\n<p>The<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2020\/10\/26\/adversarial-machine-learning-threat-matrix\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Adversarial ML Threat Matrix<\/a>, published last month by researchers at Microsoft, IBM, Nvidia, MITRE, and other security and AI companies, provides security researchers with a framework to find weak spots and potential adversarial vulnerabilities in software ecosystems that include machine learning components. The Adversarial ML Threat Matrix follows the ATT&amp;CK framework, a known and trusted format among security researchers.<\/p>\n<p>Another useful project is IBM\u2019s Adversarial Robustness Toolbox, an open-source Python library that provides tools to evaluate machine learning models for adversarial vulnerabilities and help developers harden their AI systems.<\/p>\n<p>These and other adversarial defense tools that will be developed in the future need to be backed by the right policies to make sure machine learning models are safe. Software platforms such as GitHub and Google Play must establish procedures and integrate some of these tools into the vetting process of applications that include machine learning models. Bug bounties for adversarial vulnerabilities can also be a good measure to make sure the machine learning systems used by millions of users are robust.<\/p>\n<p>New regulations for the security of machine learning systems might also be necessary. 
Just as the software that handles sensitive operations and information is expected to conform to a set of standards, machine learning algorithms used in critical applications such as biometric authentication and medical imaging must be audited for robustness against adversarial attacks.<\/p>\n<p>As the adoption of machine learning continues to expand, the threat of adversarial attacks is becoming more imminent. Adversarial vulnerabilities are a ticking time bomb. Only a systematic response can defuse it.<\/p>\n<p><i><span>This article was originally published by Ben Dickson on <\/span><\/i><a href=\"https:\/\/bdtechtalks.com\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><i><span>TechTalks<\/span><\/i><\/a><i><span>, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article <a href=\"https:\/\/bdtechtalks.com\/2020\/12\/16\/machine-learning-adversarial-attacks-against-machine-learning-time-bomb\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">here<\/a>.&nbsp;<\/span><\/i><\/p>\n<p class=\"c-post-pubDate\"> Published January 8, 2021 \u2014 08:48 UTC <\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/neural\/2021\/01\/08\/adversarial-attacks-are-a-ticking-time-bomb-but-no-one-cares-syndication\/\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>If you\u2019ve been following news about artificial intelligence, you\u2019ve probably heard of or seen modified images of pandas and turtles and stop signs that look ordinary to the human eye but 
cause&#8230;<\/p>\n","protected":false},"author":1,"featured_media":2158,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/2157"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2157"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/2157\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/2158"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2157"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2157"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2157"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}