{"id":8925,"date":"2021-11-13T15:00:37","date_gmt":"2021-11-13T15:00:37","guid":{"rendered":"http:\/\/TheNextWeb=1373081"},"modified":"2021-11-13T15:00:37","modified_gmt":"2021-11-13T15:00:37","slug":"what-100-suicide-notes-taught-us-about-creating-more-empathetic-chatbots","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=8925","title":{"rendered":"What 100 suicide notes taught us about creating more empathetic chatbots"},"content":{"rendered":"\n<p>While the art of conversation in machines is limited, there are improvements with every iteration. As machines are developed to navigate complex conversations, there will be technical and ethical challenges in how they detect and respond to sensitive human issues.<\/p>\n<p>Our work involves building chatbots for a range of uses in health care. Our system, which incorporates multiple algorithms used inartificial intelligence (AI) and natural language processing, has been in development at the <a href=\"https:\/\/aehrc.csiro.au\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Australian e-Health Research Centre<\/a> since 2014.<\/p>\n<p>The system has generated several chatbot apps which are being trialed among selected individuals, usually with an underlying medical condition or who require reliable health-related information.<\/p>\n<p>They include <a href=\"https:\/\/theconversation.com\/new-app-helps-people-with-neurological-conditions-practise-speech-51665\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">HARLIE<\/a> for Parkinson\u2019s disease and <a href=\"https:\/\/theconversation.com\/the-future-of-chatbots-is-more-than-just-small-talk-53293\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Autism Spectrum Disorder<\/a>, <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/33234441\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Edna<\/a> for people undergoing genetic counselling, Dolores for people living with chronic pain, and Quin for people who want 
to quit smoking.<\/p>\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\" readability=\"7.8412698412698\">\n<p lang=\"en\" dir=\"ltr\">RECOVER\u2019s resident robot was a huge hit at our recent photoshoot. Our team are currently developing two <a href=\"https:\/\/twitter.com\/hashtag\/chatbots?src=hash&amp;ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">#chatbots<\/a> for people with <a href=\"https:\/\/twitter.com\/hashtag\/whiplash?src=hash&amp;ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">#whiplash<\/a> and <a href=\"https:\/\/twitter.com\/hashtag\/chronicpain?src=hash&amp;ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">#chronicpain<\/a>. Dolores will be set loose at local pain clinics next month. <a href=\"https:\/\/t.co\/ThG8danV8l\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">pic.twitter.com\/ThG8danV8l<\/a><\/p>\n<p>\u2014 UQ RECOVER Injury Research Centre (@RecoverResearch) <a href=\"https:\/\/twitter.com\/RecoverResearch\/status\/1394776246525960195?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">May 18, 2021<\/a><\/p>\n<\/blockquote>\n<p><a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/?term=%28suicide%29+AND+%28%28autism%29+OR+%28smoking%29+OR+%28chronic+pain%29+OR+%28parkinson%27s+disease%29%29&amp;sort=\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Research<\/a> has shown that people with certain underlying medical conditions are more likely to think about suicide than the general public. 
We have to make sure our chatbots take this into account.<\/p>\n<figure class=\"align-center \" readability=\"3\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/images.theconversation.com\/files\/431393\/original\/file-20211110-6892-12wzwoz.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" alt=\"Siri often doesn\u2019t understand the sentiment behind and context of phrases. Screenshot\/Author provided image\" width=\"600\" height=\"450\" class=\"js-lazy\" data-srcset=\"https:\/\/images.theconversation.com\/files\/431393\/original\/file-20211110-6892-12wzwoz.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=450&amp;fit=crop&amp;dpr=1 600w, https:\/\/images.theconversation.com\/files\/431393\/original\/file-20211110-6892-12wzwoz.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=450&amp;fit=crop&amp;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/431393\/original\/file-20211110-6892-12wzwoz.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=450&amp;fit=crop&amp;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/431393\/original\/file-20211110-6892-12wzwoz.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=566&amp;fit=crop&amp;dpr=1 754w, https:\/\/images.theconversation.com\/files\/431393\/original\/file-20211110-6892-12wzwoz.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=566&amp;fit=crop&amp;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/431393\/original\/file-20211110-6892-12wzwoz.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=566&amp;fit=crop&amp;dpr=3 2262w\"><figcaption>Siri often doesn\u2019t understand the sentiment behind and context of phrases. Screenshot\/Author provided image<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<p>We believe the safest approach to understanding the language patterns of people with suicidal thoughts is to study their messages. 
The choice and arrangement of their words, the sentiment and the rationale all offer insight into the author\u2019s thoughts.<\/p>\n<p>For our <a href=\"https:\/\/ebooks.iospress.nl\/volumearticle\/56629\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">recent work<\/a> we examined more than 100 suicide notes from various <a href=\"https:\/\/www.amazon.com\/Suicide-Notes-Predictive-Clues-Patterns\/dp\/0898853990\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">texts<\/a> and identified four relevant language patterns: negative sentiment, constrictive thinking, idioms and logical fallacies.<\/p>\n<p><em><strong>Read more: <a href=\"https:\/\/theconversation.com\/introducing-edna-the-chatbot-trained-to-help-patients-make-a-difficult-medical-decision-150847\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Introducing Edna: the chatbot trained to help patients make a difficult medical decision<\/a><br \/><\/strong><\/em><\/p>\n<h2>Negative sentiment and constrictive thinking<\/h2>\n<p>As one would expect, many phrases in the notes we analyzed expressed negative sentiment, such as:<\/p>\n<blockquote readability=\"6\">\n<p>\u2026just this heavy, overwhelming despair\u2026<\/p>\n<\/blockquote>\n<p>There was also language that pointed to constrictive thinking. For example:<\/p>\n<blockquote readability=\"5\">\n<p>I will <em>never<\/em> escape the darkness or misery\u2026<\/p>\n<\/blockquote>\n<p>The phenomenon of constrictive thoughts and language is <a href=\"http:\/\/www.suicidology-online.com\/pdf\/SOL-2010-1-5-18.pdf\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">well documented<\/a>. Constrictive thinking frames a prolonged source of distress in absolute terms.<\/p>\n<p>For the author in question, there is no compromise. 
The language that manifests as a result often contains terms such as <em>either\/or, always, never, forever, nothing, totally, all<\/em> and <em>only<\/em>.<\/p>\n<h2>Language idioms<\/h2>\n<p>Idioms such as \u201cthe grass is greener on the other side\u201d were also common \u2014 although not directly linked to suicidal ideation. Idioms are often colloquial and culturally derived, and their real meaning can differ vastly from the literal interpretation.<\/p>\n<p>Such idioms are problematic for chatbots to understand. Unless a bot has been programmed with the intended meaning, it will operate under the assumption of a literal meaning.<\/p>\n<p>Chatbots can make some disastrous mistakes if they\u2019re not encoded with knowledge of the real meaning behind certain idioms. In the example below, a more suitable response from Siri would have been to redirect the user to a crisis hotline.<\/p>\n<figure class=\"align-center \" readability=\"3\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/images.theconversation.com\/files\/429473\/original\/file-20211031-21-eduz7j.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" alt=\"An example of Apple\u2019s Siri giving an inappropriate response to the search query: \u2018How do I tie a hangman\u2019s noose it\u2019s time to bite the dust\u2019? Author provided image\" width=\"600\" height=\"450\" class=\"js-lazy\" data-srcset=\"https:\/\/images.theconversation.com\/files\/429473\/original\/file-20211031-21-eduz7j.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=450&amp;fit=crop&amp;dpr=1 600w, https:\/\/images.theconversation.com\/files\/429473\/original\/file-20211031-21-eduz7j.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=450&amp;fit=crop&amp;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/429473\/original\/file-20211031-21-eduz7j.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=450&amp;fit=crop&amp;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/429473\/original\/file-20211031-21-eduz7j.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=566&amp;fit=crop&amp;dpr=1 754w, https:\/\/images.theconversation.com\/files\/429473\/original\/file-20211031-21-eduz7j.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=566&amp;fit=crop&amp;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/429473\/original\/file-20211031-21-eduz7j.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=566&amp;fit=crop&amp;dpr=3 2262w\"><figcaption>An example of Apple\u2019s Siri giving an inappropriate response to the search query: \u2018How do I tie a hangman\u2019s noose it\u2019s time to bite the dust\u2019? Author provided image<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<h2>The fallacies in reasoning<\/h2>\n<p>Words such as <em>therefore, ought<\/em> and their various synonyms require special attention from chatbots. That\u2019s because these are often bridge words between a thought and an action. Behind them is some logic consisting of a premise that reaches a conclusion, <a href=\"https:\/\/www.goodreads.com\/book\/show\/22920682-the-burning-brand\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">such as<\/a>:<\/p>\n<blockquote readability=\"10\">\n<p>If I were dead, she would go on living, laughing, trying her luck. But she has thrown me over and still does all those things. <em>Therefore<\/em>, I am as dead.<\/p>\n<\/blockquote>\n<p>This closely resembles a common fallacy (an example of faulty reasoning) called <a href=\"https:\/\/en.wikipedia.org\/wiki\/Affirming_the_consequent\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">affirming the consequent<\/a>. Below is a more pathological example of this, which has been called <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/epdf\/10.1111\/j.1943-278X.1981.tb01006.x\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">catastrophic logic<\/a>:<\/p>\n<blockquote readability=\"6\">\n<p>I have failed at everything. 
If I do this, I will succeed.<\/p>\n<\/blockquote>\n<p>This is an example of a semantic <a href=\"https:\/\/plato.stanford.edu\/entries\/fallacies\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">fallacy<\/a> (and constrictive thinking) concerning the meaning of <em>I<\/em>, which changes between the two clauses that make up the second sentence.<\/p>\n<p><a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/6757205\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">This fallacy<\/a> occurs when the author expresses that they will experience feelings such as happiness or success after dying by suicide \u2014 which is what <em>this<\/em> refers to in the note above. This kind of <a href=\"https:\/\/www.amazon.com\/Voices-Death-Edwin-S-Shneidman\/dp\/0060140232\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">\u201cautopilot\u201d mode<\/a> was often described by people who gave psychological accounts in interviews after attempting suicide.<\/p>\n<h2>Preparing future chatbots<\/h2>\n<p>The good news is that detecting negative sentiment and constrictive language can be achieved with off-the-shelf algorithms and publicly available data. Chatbot developers can (and should) implement these algorithms.<\/p>\n<figure class=\"align-center zoomable\" readability=\"3\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><a href=\"https:\/\/images.theconversation.com\/files\/429547\/original\/file-20211101-19-1t1eq8d.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/images.theconversation.com\/files\/429547\/original\/file-20211101-19-1t1eq8d.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" alt=\"Our smoking cessation chatbot Quin can detect general negative statements with constrictive thinking. Author provided image\" width=\"600\" height=\"545\" class=\"js-lazy\" data-srcset=\"https:\/\/images.theconversation.com\/files\/429547\/original\/file-20211101-19-1t1eq8d.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=545&amp;fit=crop&amp;dpr=1 600w, https:\/\/images.theconversation.com\/files\/429547\/original\/file-20211101-19-1t1eq8d.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=545&amp;fit=crop&amp;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/429547\/original\/file-20211101-19-1t1eq8d.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=545&amp;fit=crop&amp;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/429547\/original\/file-20211101-19-1t1eq8d.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=684&amp;fit=crop&amp;dpr=1 754w, https:\/\/images.theconversation.com\/files\/429547\/original\/file-20211101-19-1t1eq8d.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=684&amp;fit=crop&amp;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/429547\/original\/file-20211101-19-1t1eq8d.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=684&amp;fit=crop&amp;dpr=3 2262w\"><\/a><figcaption>Our smoking cessation chatbot Quin can detect general negative statements with constrictive thinking. Author provided image<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<p>Generally speaking, the bot\u2019s performance and detection accuracy will depend on the quality and size of the training data. As such, there should never be just one algorithm involved in detecting language related to poor mental health.<\/p>\n<p>Detecting logical reasoning styles is a <a href=\"https:\/\/ebooks.iospress.nl\/volumearticle\/56629\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">new and promising area of research<\/a>. Formal logic is well established in mathematics and computer science, but establishing a machine logic for commonsense reasoning that would detect these fallacies is no small feat.<\/p>\n<p>Here\u2019s an example of our system thinking about a brief conversation that included the semantic fallacy mentioned earlier. 
Notice it first hypothesizes what <em>this<\/em> could refer to, based on its interactions with the user.<\/p>\n<figure class=\"align-center zoomable\" readability=\"6\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><a href=\"https:\/\/images.theconversation.com\/files\/429549\/original\/file-20211101-19-u942i8.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/images.theconversation.com\/files\/429549\/original\/file-20211101-19-u942i8.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" alt=\"Our chatbots use a logic system in which a stream of \u2018thoughts\u2019 can be used to form hypotheses, predictions and presuppositions. But just like a human, the reasoning is fallible. Image: Author provided\" width=\"600\" height=\"461\" class=\"js-lazy\" data-srcset=\"https:\/\/images.theconversation.com\/files\/429549\/original\/file-20211101-19-u942i8.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=461&amp;fit=crop&amp;dpr=1 600w, https:\/\/images.theconversation.com\/files\/429549\/original\/file-20211101-19-u942i8.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=461&amp;fit=crop&amp;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/429549\/original\/file-20211101-19-u942i8.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=461&amp;fit=crop&amp;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/429549\/original\/file-20211101-19-u942i8.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=580&amp;fit=crop&amp;dpr=1 754w, https:\/\/images.theconversation.com\/files\/429549\/original\/file-20211101-19-u942i8.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=580&amp;fit=crop&amp;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/429549\/original\/file-20211101-19-u942i8.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=580&amp;fit=crop&amp;dpr=3 2262w\"><\/a><figcaption>Our chatbots use a logic system in which a stream of \u2018thoughts\u2019 can be used to form hypotheses, predictions and presuppositions. But just like a human, the reasoning is fallible. Image: Author provided<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<p>Although this technology still requires further research and development, it provides machines with a necessary \u2014 albeit primitive \u2014 understanding of how words can relate to complex real-world scenarios (which is basically what semantics is about).<\/p>\n<p>And machines will need this capability if they are to ultimately address sensitive human affairs \u2014 first by detecting warning signs, and then delivering the appropriate response.<\/p>\n<p><em>This article by <a href=\"https:\/\/theconversation.com\/profiles\/david-ireland-209154\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">David Ireland<\/a>, Senior Research Scientist, Australian e-Health Research Centre, <a href=\"https:\/\/theconversation.com\/institutions\/csiro-1035\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">CSIRO<\/a>, and <a href=\"https:\/\/theconversation.com\/profiles\/dana-kai-bradford-264008\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Dana Kai Bradford<\/a>, Principal Research Scientist, Australian eHealth Research Centre, <a href=\"https:\/\/theconversation.com\/institutions\/csiro-1035\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">CSIRO<\/a>, is republished from <a href=\"https:\/\/theconversation.com\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/we-studied-suicide-notes-to-learn-about-the-language-of-despair-and-were-training-ai-chatbots-to-do-the-same-169828\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">original article<\/a>.<\/em><\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/news\/what-100-suicide-notes-taught-us-about-creating-more-empathetic-chatbots-syndication\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>While the art of conversation in machines is limited, there are improvements with every iteration. 
As machines are developed to navigate complex conversations, there will be technical and ethical challenges in how&#8230;<\/p>\n","protected":false},"author":1,"featured_media":8926,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/8925"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=8925"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/8925\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/8926"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=8925"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=8925"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=8925"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}