{"id":1827,"date":"2020-12-17T09:36:19","date_gmt":"2020-12-17T09:36:19","guid":{"rendered":"https:\/\/thenextweb.com\/?p=1332318"},"modified":"2020-12-17T09:36:19","modified_gmt":"2020-12-17T09:36:19","slug":"heres-how-neuroscience-can-protect-ai-from-cyberattacks","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=1827","title":{"rendered":"Here\u2019s how neuroscience can protect AI from cyberattacks"},"content":{"rendered":"\n<p>Deep learning has come a long way since the days it could only recognize hand-written characters on checks and envelopes. Today, deep neural networks have become a key component of many<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/12\/30\/computer-vision-applications-deep-learning\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">computer vision applications<\/a>, from photo and video editors to medical software and<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2018\/09\/17\/self-driving-cars-ai-computer-vision\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">self-driving cars<\/a>.<\/p>\n<p>Roughly fashioned after the structure of the brain, neural networks have come closer to seeing the world as we humans do. But they still have a long way to go and make mistakes in situations that humans would never err.<\/p>\n<p>These situations, generally known as<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2020\/07\/15\/machine-learning-adversarial-examples\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">adversarial examples<\/a>, change the behavior of an AI model in befuddling ways. Adversarial machine learning is one of the greatest challenges of current artificial intelligence systems. 
They can cause machine learning models to fail in unpredictable ways or become <a href=\"https:\/\/bdtechtalks.com\/2020\/10\/26\/adversarial-machine-learning-threat-matrix\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">vulnerable to cyberattacks<\/a>.<\/p>\n<figure class=\"wp-block-image size-large\"><img src=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/02\/ai-adversarial-example-panda-gibbon.png?resize=696%2C271&amp;ssl=1\" alt=\"ai adversarial example panda gibbon\" width=\"696\" height=\"271\"><figcaption>Adversarial example: Adding an imperceptible layer of noise to this panda picture causes a convolutional neural network to mistake it for a gibbon.<\/figcaption><\/figure>\n<p>Creating AI systems that are resilient against adversarial attacks has become <a href=\"https:\/\/bdtechtalks.com\/2020\/04\/27\/deep-learning-mode-connectivity-adversarial-attacks\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">an active<\/a> <a href=\"https:\/\/bdtechtalks.com\/2019\/04\/29\/ai-audio-adversarial-examples\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">area of<\/a> <a href=\"https:\/\/bdtechtalks.com\/2019\/04\/02\/ai-nlp-paraphrasing-adversarial-attacks\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">research<\/a> and a hot topic of discussion at AI conferences. In <a href=\"https:\/\/bdtechtalks.com\/2019\/01\/14\/what-is-computer-vision\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">computer vision<\/a>, one interesting way to protect deep learning systems against adversarial attacks is to apply findings from neuroscience to close the gap between neural networks and the mammalian vision system.<\/p>\n<p>Using this approach, researchers at MIT and the MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more predictable in their behavior and more robust to adversarial perturbations. 
In a paper <a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2020.06.16.154542v1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">published on the bioRxiv preprint server<\/a>, the researchers introduce VOneNet, an architecture that combines current deep learning techniques with neuroscience-inspired neural networks.<\/p>\n<p>The work, done with help from scientists at the University of Munich, Ludwig Maximilian University, and the University of Augsburg, was accepted at NeurIPS 2020, one of the most prominent annual AI conferences, which will be held virtually this year.<\/p>\n<h2>Convolutional neural networks<\/h2>\n<p>The main architecture used in computer vision today is the <a href=\"https:\/\/bdtechtalks.com\/2020\/01\/06\/convolutional-neural-networks-cnn-convnets\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">convolutional neural network<\/a> (CNN). When stacked on top of each other, multiple convolutional layers can be trained to learn and extract hierarchical features from images. 
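That stacking can be sketched with the convolution operation itself. The following minimal numpy example uses a hand-set edge filter purely for illustration (a trained CNN learns its filter weights from data): a first-layer filter fires only where a vertical edge appears in the image.

```python
import numpy as np

def conv2d(image, kernel):
    """Plain 'valid' sliding-window convolution -- the core operation a CNN
    layer applies with many filters in parallel."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An 8x8 "image": dark left half, bright right half -- a vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# A lower-layer filter tuned to vertical edges (Sobel-like), hand-set here.
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])

act1 = np.maximum(conv2d(img, edge_kernel), 0)  # convolution + ReLU

# The layer responds only where the edge actually is (output columns 2-3).
# Stacking further convolutions over maps like this one is what builds up
# detectors for corners, textures, and eventually whole objects.
print(act1[0])  # [0. 0. 4. 4. 0. 0.]
```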
Lower layers find general patterns such as corners and edges, and higher layers gradually become adept at finding more specific things such as objects and people.<\/p>\n<figure class=\"wp-block-image size-large\"><img src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/06\/neural-networks-layers-visualization.jpg?resize=585%2C803&amp;ssl=1\" alt=\"Visualization of a neural network's features\" width=\"585\" height=\"803\"><figcaption>Each layer of the neural network extracts specific features from the input image.<\/figcaption><\/figure>\n<p>In comparison to traditional fully connected networks, ConvNets have proven to be both more robust and more computationally efficient. There remain, however, fundamental differences between the way CNNs and the human visual system process information.<\/p>\n<p>\u201cDeep neural networks (and convolutional neural networks in particular) have emerged as surprisingly good models of the visual cortex\u2014surprisingly, they tend to fit experimental data collected from the brain even better than computational models that were tailor-made for explaining the neuroscience data,\u201d David Cox, IBM Director of the MIT-IBM Watson AI Lab, told <em>TechTalks<\/em>. \u201cBut not every deep neural network matches the brain data equally well, and there are some persistent gaps where the brain and the DNNs differ.\u201d<\/p>\n<p>The most prominent of these gaps are adversarial examples, in which subtle perturbations such as a small patch or a layer of imperceptible noise can cause neural networks to misclassify their inputs. 
These changes go mostly unnoticed to the human eye.<\/p>\n<figure class=\"wp-block-image size-large\"><img src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/12\/ai-adversarial-attack-stop-sign.png?resize=696%2C390&amp;ssl=1\" alt=\"ai adversarial attack stop sign\" width=\"696\" height=\"390\"><figcaption>AI researchers discovered that by adding small black and white stickers to stop signs, they could make them invisible to computer vision algorithms (Source: arxiv.org)<\/figcaption><\/figure>\n<p>\u201cIt is certainly the case that the images that fool DNNs would never fool our own visual systems,\u201d Cox says. \u201cIt\u2019s also the case that DNNs are surprisingly brittle against natural degradations (e.g., adding noise) to images, so robustness in general seems to be an open problem for DNNs. With this in mind, we felt this was a good place to look for differences between brains and DNNs that might be helpful.\u201d<\/p>\n<p>Cox has been exploring the <a href=\"https:\/\/bdtechtalks.com\/2020\/01\/20\/neuroscience-artificial-intelligence-synergies\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">intersection of neuroscience and artificial intelligence<\/a> since the early 2000s, when he was a student of James DiCarlo, a professor of neuroscience at MIT. The two have continued to work together since.<\/p>\n<p>\u201cThe brain is an incredibly powerful and effective information processing machine, and it\u2019s tantalizing to ask if we can learn new tricks from it that can be used for practical purposes. At the same time, we can use what we know about artificial systems to provide guiding theories and hypotheses that can suggest experiments to help us understand the brain,\u201d Cox says.<\/p>\n<h2>Brain-like neural networks<\/h2>\n<p>For the new research, Cox and DiCarlo joined Joel Dapello and Tiago Marques, the lead authors of the paper, to see if neural networks became more robust to adversarial attacks when their activations were similar to brain activity. The researchers tested several popular CNN architectures trained on the <a href=\"http:\/\/www.image-net.org\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">ImageNet data set<\/a>, including AlexNet, VGG, and different variations of ResNet. 
They also included some deep learning models that had undergone \u201cadversarial training,\u201d a process in which a neural network is trained on adversarial examples to avoid misclassifying them.<\/p>\n<p>The scientists evaluated the AI models using <a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/407007v1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">the \u201cBrainScore\u201d metric<\/a>, which compares activations in deep neural networks with neural responses in the brain. They then measured the robustness of each model by testing it against white-box adversarial attacks, in which an attacker has full knowledge of the structure and parameters of the target neural network.<\/p>\n<p>\u201cTo our surprise, the more brain-like a model was, the more robust the system was against adversarial attacks,\u201d Cox says. \u201cInspired by this, we asked if it was possible to improve robustness (including adversarial robustness) by adding a more faithful simulation of the early visual cortex\u2014based on neuroscience experiments\u2014to the input stage of the network.\u201d<\/p>\n<figure class=\"wp-block-image size-large\"><img src=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/neural-networks-adversarial-robustness.png?resize=560%2C532&amp;ssl=1\" alt=\"neural networks adversarial robustness\" width=\"560\" height=\"532\"><figcaption>Research shows that neural networks with higher BrainScores are more robust to white-box adversarial attacks.<\/figcaption><\/figure>\n<h2>VOneNet and VOneBlock<\/h2>\n<p>To further validate their findings, the researchers developed VOneNet, a hybrid deep learning architecture that combines standard CNNs with a layer of neuroscience-inspired neural networks.<\/p>\n<p>The VOneNet replaces the first few layers of the CNN with the VOneBlock, a neural network architecture fashioned after the primary visual cortex of primates, also known as the V1 area. This means that image data is first processed by the VOneBlock before being passed on to the rest of the network.<\/p>\n<p>The VOneBlock is itself composed of a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Gabor_filter\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Gabor filter bank<\/a> (GFB), simple and complex cell nonlinearities, and neuronal stochasticity. 
The GFB is similar to the convolutional layers found in other neural networks. But while classic convolutional layers start with random parameter values and tune them during training, the values of the GFB parameters are determined and fixed based on what we know about activations in the primary visual cortex.<\/p>\n<figure class=\"wp-block-image size-large\"><img src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/VOneBlock-architecture.jpg?resize=696%2C812&amp;ssl=1\" alt=\"VOneBlock architecture\" width=\"696\" height=\"812\"><figcaption>The VOneBlock is a neural network architecture that mimics the functions of the primary visual cortex.<\/figcaption><\/figure>\n<p>\u201cThe weights of the GFB and other architectural choices of the VOneBlock are engineered according to biology. This means that all the choices we made for the VOneBlock were constrained by neurophysiology. In other words, we designed the VOneBlock to mimic as much as possible the primate primary visual cortex (area V1). We considered available data collected over the last four decades from several studies to determine the VOneBlock parameters,\u201d says Tiago Marques, PhD, PhRMA Foundation Postdoctoral Fellow at MIT and co-author of the paper.<\/p>\n<p>While there are significant differences in the visual cortex of different primates, there are also many shared features, especially in the V1 area. \u201cFortunately, across primates differences seem to be minor, and in fact, there are plenty of studies showing that monkeys\u2019 object recognition capabilities resemble those of humans. In our model we used available published data characterizing responses of monkeys\u2019 V1 neurons. While our model is still only an approximation of primate V1 (it does not include all known data, and even that data is somewhat limited \u2013 there is a lot that we still do not know about V1 processing), it is a good approximation,\u201d Marques says.<\/p>\n<p>Beyond the GFB layer, the simple and complex cells in the VOneBlock give the neural network flexibility to detect features under different conditions. \u201cUltimately, the goal of object recognition is to identify the existence of objects independently of their exact shape, size, location, and other low-level features,\u201d Marques says. \u201cIn the VOneBlock it seems that both simple and complex cells serve complementary roles in supporting performance under different image perturbations. 
Simple cells were particularly important for dealing with common corruptions, while complex cells were more important for white-box adversarial attacks.\u201d<\/p>\n<h2>VOneNet in action<\/h2>\n<p>One of the strengths of the VOneBlock is its compatibility with current CNN architectures. \u201cThe VOneBlock was designed to have a plug-and-play functionality,\u201d Marques says. \u201cThat means that it directly replaces the input layer of a standard CNN structure. A transition layer that follows the core of the VOneBlock ensures that its output can be made compatible with the rest of the CNN architecture.\u201d<\/p>\n<p>The researchers plugged the VOneBlock into several CNN architectures that perform well on the ImageNet data set. Interestingly, the addition of this simple block resulted in a considerable improvement in robustness to white-box adversarial attacks, outperforming training-based defense methods.<\/p>\n<p>\u201cSimulating the image processing of primate primary visual cortex at the front of standard CNN architectures significantly improves their robustness to image perturbations, even bringing them to outperform state-of-the-art defense methods,\u201d the researchers write in their paper.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img src=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/12\/VOneNet-adversarial-robustness.jpg?resize=669%2C903&amp;ssl=1\" alt=\"VOneNet adversarial robustness\" width=\"669\" height=\"903\"><figcaption>Experiments show that convolutional neural networks that have been modified to include the VOneBlock are more resilient against white-box adversarial attacks.<\/figcaption><\/figure>\n<\/div>\n<p>\u201cThe model of V1 that we added here is actually quite simple\u2014we\u2019re only altering the first stage of the system, while leaving the rest of the network untouched, and the biological fidelity of this V1 model is still quite simple,\u201d Cox says, adding that there is a lot more detail and nuance one could add to such a model to make it better match what is known about the brain.<\/p>\n<p>\u201cSimplicity is strength in some ways, since it isolates a smaller set of principles that might be important, but it would be interesting to explore whether other dimensions of biological fidelity might be important,\u201d he says.<\/p>\n<p>The paper challenges a trend that has become all too common in AI research in recent years. Instead of applying the latest findings on brain mechanisms, many AI scientists focus on driving advances in the field by taking advantage of vast computing resources and large data sets to train larger and larger neural networks. And as we\u2019ve discussed in these pages before, that approach<span>&nbsp;<\/span><a href=\"https:\/\/bdtechtalks.com\/2019\/11\/25\/ai-research-neural-networks-compute-costs\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">presents many challenges to AI research<\/a>.<\/p>\n<p>VOneNet shows that biological intelligence still has a lot of untapped potential and can address some of the fundamental problems AI research is facing. \u201cThe models presented here, drawn directly from primate neurobiology, indeed require less training to achieve more human-like behavior. 
This is one turn of a new virtuous circle, wherein neuroscience and artificial intelligence each feed into and reinforce the understanding and ability of the other,\u201d the authors write.<\/p>\n<p>In the future, the researchers will continue to explore the properties of VOneNet and pursue deeper integration of discoveries in neuroscience and artificial intelligence. \u201cOne limitation of our current work is that while we have shown that adding a V1 block leads to improvements, we don\u2019t have a great handle on&nbsp;<em>why<\/em>&nbsp;it does,\u201d Cox says.<\/p>\n<p>Developing the theory to answer this \u201cwhy\u201d question will enable AI researchers to home in on what really matters and to build more effective systems. They also plan to explore integrating neuroscience-inspired architectures beyond the initial layers of artificial neural networks.<\/p>\n<p>Says Cox, \u201cWe\u2019ve only just scratched the surface in terms of incorporating these elements of biological realism into DNNs, and there\u2019s a lot more we can still do. We\u2019re excited to see where this journey takes us.\u201d<\/p>\n<p><i><span>This article was originally published by<span>&nbsp;<\/span><a class=\"author url fn\" title=\"Posts by Ben Dickson\" href=\"https:\/\/bdtechtalks.com\/author\/bendee983\/\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">Ben Dickson<\/a> on <\/span><\/i><a href=\"https:\/\/bdtechtalks.com\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><i><span>TechTalks<\/span><\/i><\/a><i><span>, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. 
You can read the original article <a href=\"https:\/\/bdtechtalks.com\/2020\/12\/07\/vonenet-neurscience-inspired-deep-learning\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">here<\/a>.<\/span><\/i><\/p>\n<p class=\"c-post-pubDate\"> Published December 17, 2020 \u2014 09:36 UTC <\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/neural\/2020\/12\/17\/is-neuroscience-the-key-to-protecting-ai-from-adversarial-attacks-syndication\/\">Source<\/a><\/p>\n","protected":false}}