
{"id":19139,"date":"2023-01-31T17:37:07","date_gmt":"2023-01-31T15:37:07","guid":{"rendered":"https:\/\/ticsalutsocial.atoom.space\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/"},"modified":"2023-05-03T09:19:38","modified_gmt":"2023-05-03T07:19:38","slug":"nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial","status":"publish","type":"noticia","link":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/","title":{"rendered":"New report on Explainability in Artificial Intelligence"},"content":{"rendered":"\n<p>The Artificial Intelligence team at the TIC Salut Social Foundation has published its Report on the Explainability of Artificial Intelligence in Health, within the framework of the Catalan Government\u2019s Health\/AI Programme. The document describes the&nbsp;<strong>benefits of using explainability tools<\/strong>&nbsp;in Artificial Intelligence. It sets out the main techniques used to explain&nbsp;<strong>algorithms based on digital medical imaging, tabular data and natural language processing<\/strong>, with the aim of supporting people involved in the development of Artificial Intelligence algorithms in the field of health.<\/p>\n\n\n\n<p>Explainable Artificial Intelligence allows human users to understand why an algorithm has produced a particular result. The main author of the report, and the head of the Artificial Intelligence Area of the TIC Salut Social Foundation,\u00a0<strong>Susanna Auss\u00f3<\/strong>, explains that\u00a0<strong>\u201cIt is essential for health professionals to understand the mechanisms by which the Artificial Intelligence tool has arrived at a prediction. This knowledge is essential to build users\u2019 trust, as it gives them the tools to verify whether the answer was based on robust clinical criteria. 
Explainability comes in various formats, and it is necessary to reach agreement with the experts on the most appropriate format in each case. They are normally very visual formats that may be combined depending on the needs.\u201d<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What can Explainable Artificial Intelligence contribute?<\/strong><\/h2>\n\n\n\n<p>The use of Artificial Intelligence in the field of health is constantly growing, due to the availability of electronic health records and the vast range of related data, as well as the great potential this technology has to improve people\u2019s health and well-being.<\/p>\n\n\n\n<p>Some health centres mainly use Artificial Intelligence to support the diagnosis, prognosis and treatment of certain diseases. In fact,&nbsp;<a href=\"https:\/\/ticsalutsocial.atoom.space\/noticia\/lobservatori-dia-en-salut-identifica-prop-dun-centenar-dalgorismes-dintelligencia-artificial-als-centres-del-siscat-i-de-recerca-de-catalunya\/\" target=\"_blank\" rel=\"noreferrer noopener\">the Health AI Observatory has already detected nearly 100 Artificial Intelligence algorithms<\/a>&nbsp;that are in the development stage or are being used in a controlled manner.<\/p>\n\n\n\n<p>This technology is used as a decision-making support tool, as health care staff make the final decision. However, it is important that&nbsp;<strong>this decision is made with the knowledge provided by the explainability tools.<\/strong>&nbsp;Without these tools, Artificial Intelligence models are \u201cblack boxes\u201d that prevent us from understanding what is happening. 
This is the exact problem that Explainable Artificial Intelligence seeks to solve.<\/p>\n\n\n\n<p>To explain a machine learning model in human terms,\u00a0<strong>Explainable Artificial Intelligence must address aspects related to the correctness, robustness, bias, improvement, transferability and human understanding of the model<\/strong>. This makes it possible to build professionals\u2019 trust, as they will be able to understand the model\u2019s limitations and difficulties and relate them to simpler concepts; to involve stakeholders in building an intuitive, understandable model; and to make better models by eliminating errors and identifying unfair scenarios caused by possible biases.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Taxonomy of Explainable Artificial Intelligence<\/strong><\/h2>\n\n\n\n<p>Given the lack of consensus on how to classify Explainable Artificial Intelligence techniques, the report describes several taxonomies: intrinsic and post hoc explainability; global and local explainability; transparent and opaque models; and model-agnostic and model-dependent techniques.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Explainability of algorithms based on digital medical imaging, tabular data and natural language processing<\/strong><\/h2>\n\n\n\n<p>In three dedicated chapters, the report covers the different methods of explanation based on the source of the data. First, it sets out methods for explaining algorithms based on digital medical imaging, such as X-rays and magnetic resonance imaging. The main methods are CAM (class activation mapping), Grad-CAM (gradient-weighted class activation mapping), LRP (layer-wise relevance propagation), LIME (local interpretable model-agnostic explanations), and SHAP (Shapley additive explanations).<\/p>\n\n\n\n<p>Second, it describes the explainability of algorithms based on tabular data, i.e. 
variables drawn from sources such as laboratory tests, omics data, vital signs and hospital management data, among others. In this case, the techniques presented are PDP (partial dependence plot), ICE (individual conditional expectation), C-ICE (centred ICE), counterfactual explanations, LIME (local interpretable model-agnostic explanations), anchors, and SHAP (Shapley additive explanations).<\/p>\n\n\n\n<p>Finally, the document addresses the explainability of algorithms based on natural language processing. This makes it possible, for example, to extract structured information from a free-text report containing diagnostic, treatment or monitoring data. The techniques specified for this type of explainability are SHAP (Shapley additive explanations), GbSA (gradient-based sensitivity analysis), LRP (layer-wise relevance propagation), and LIME (local interpretable model-agnostic explanations).<\/p>\n","protected":false},"author":12,"featured_media":20776,"menu_order":0,"template":"","meta":{"_acf_changed":false,"inline_featured_image":false},"etiqueta":[],"tipus":[719],"topic":[],"class_list":["post-19139","noticia","type-noticia","status-publish","has-post-thumbnail","hentry","tipus-artificial-intelligence"],"acf":{"finalitzat":false,"tipus_plantilla":"objectius","template_objectius":{"cont_principal":"","imatge":20776,"cont_seguent":{"titol":"","contingut":""},"seccio_llistat":{"titol":"","llistat":false},"objectius":{"titol":"","objectius":false},"documents":false},"autor":"","imatge":false,"textos_destacats":[{"texte":"Explainable Artificial Intelligence allows human users to understand why an algorithm has produced a particular result.<br \/>\r\n"}],"documents":[{"titol_doc":"Download the guide to explainability in Artificial 
Intelligence","document":{"ID":20896,"id":20896,"title":"EXPLICABILITAT_EN_Final","filename":"EXPLICABILITAT_EN_Final.pdf","filesize":3375036,"url":"https:\/\/ticsalutsocial.atoom.space\/wp-content\/uploads\/2023\/01\/EXPLICABILITAT_EN_Final.pdf","link":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/explicabilitat_en_final\/","alt":"","author":"12","description":"","caption":"","name":"explicabilitat_en_final","status":"inherit","uploaded_to":19139,"date":"2023-05-02 14:21:44","modified":"2023-05-02 14:21:44","menu_order":0,"mime_type":"application\/pdf","type":"application","subtype":"pdf","icon":"https:\/\/ticsalutsocial.atoom.space\/wp-includes\/images\/media\/document.png"}}]},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>New report on Explainability in Artificial Intelligence - Fundaci\u00f3 TIC Salut i Social<\/title>\n<meta name=\"description\" content=\"L&#039;explicabilitat en la Intel\u00b7lig\u00e8ncia Artificial permet a les persones usu\u00e0ries entendre per qu\u00e8 un algorisme ha donat un determinat resultat\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"New report on Explainability in Artificial Intelligence - Fundaci\u00f3 TIC Salut i Social\" \/>\n<meta property=\"og:description\" content=\"L&#039;explicabilitat en la Intel\u00b7lig\u00e8ncia Artificial permet a les persones usu\u00e0ries entendre per qu\u00e8 un algorisme ha donat un determinat resultat\" \/>\n<meta property=\"og:url\" 
content=\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/\" \/>\n<meta property=\"og:site_name\" content=\"Fundaci\u00f3 TIC Salut i Social\" \/>\n<meta property=\"article:modified_time\" content=\"2023-05-03T07:19:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/ticsalutsocial.atoom.space\/wp-content\/uploads\/2023\/01\/Informe-explicabilitat.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1920\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/\",\"url\":\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/\",\"name\":\"New report on Explainability in Artificial Intelligence - Fundaci\u00f3 TIC Salut i Social\",\"isPartOf\":{\"@id\":\"https:\/\/ticsalutsocial.atoom.space\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/ticsalutsocial.atoom.space\/wp-content\/uploads\/2023\/01\/Informe-explicabilitat.jpg\",\"datePublished\":\"2023-01-31T15:37:07+00:00\",\"dateModified\":\"2023-05-03T07:19:38+00:00\",\"description\":\"L'explicabilitat en la Intel\u00b7lig\u00e8ncia 
Artificial permet a les persones usu\u00e0ries entendre per qu\u00e8 un algorisme ha donat un determinat resultat\",\"breadcrumb\":{\"@id\":\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#primaryimage\",\"url\":\"https:\/\/ticsalutsocial.atoom.space\/wp-content\/uploads\/2023\/01\/Informe-explicabilitat.jpg\",\"contentUrl\":\"https:\/\/ticsalutsocial.atoom.space\/wp-content\/uploads\/2023\/01\/Informe-explicabilitat.jpg\",\"width\":2560,\"height\":1920,\"caption\":\"Informe sobre Explicabilitat de la Intel\u00b7lig\u00e8ncia Artificial en Salut\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Inici\",\"item\":\"https:\/\/ticsalutsocial.atoom.space\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"New report on Explainability in Artificial Intelligence\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/ticsalutsocial.atoom.space\/en\/#website\",\"url\":\"https:\/\/ticsalutsocial.atoom.space\/en\/\",\"name\":\"Fundaci\u00f3 TIC Salut i Social\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/ticsalutsocial.atoom.space\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ 
Yoast SEO plugin. -->","yoast_head_json":{"title":"New report on Explainability in Artificial Intelligence - Fundaci\u00f3 TIC Salut i Social","description":"L'explicabilitat en la Intel\u00b7lig\u00e8ncia Artificial permet a les persones usu\u00e0ries entendre per qu\u00e8 un algorisme ha donat un determinat resultat","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/","og_locale":"en_US","og_type":"article","og_title":"New report on Explainability in Artificial Intelligence - Fundaci\u00f3 TIC Salut i Social","og_description":"L'explicabilitat en la Intel\u00b7lig\u00e8ncia Artificial permet a les persones usu\u00e0ries entendre per qu\u00e8 un algorisme ha donat un determinat resultat","og_url":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/","og_site_name":"Fundaci\u00f3 TIC Salut i Social","article_modified_time":"2023-05-03T07:19:38+00:00","og_image":[{"width":2560,"height":1920,"url":"https:\/\/ticsalutsocial.atoom.space\/wp-content\/uploads\/2023\/01\/Informe-explicabilitat.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/","url":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/","name":"New report on Explainability in Artificial Intelligence - Fundaci\u00f3 TIC Salut i Social","isPartOf":{"@id":"https:\/\/ticsalutsocial.atoom.space\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#primaryimage"},"image":{"@id":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#primaryimage"},"thumbnailUrl":"https:\/\/ticsalutsocial.atoom.space\/wp-content\/uploads\/2023\/01\/Informe-explicabilitat.jpg","datePublished":"2023-01-31T15:37:07+00:00","dateModified":"2023-05-03T07:19:38+00:00","description":"L'explicabilitat en la Intel\u00b7lig\u00e8ncia Artificial permet a les persones usu\u00e0ries entendre per qu\u00e8 un algorisme ha donat un determinat 
resultat","breadcrumb":{"@id":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#primaryimage","url":"https:\/\/ticsalutsocial.atoom.space\/wp-content\/uploads\/2023\/01\/Informe-explicabilitat.jpg","contentUrl":"https:\/\/ticsalutsocial.atoom.space\/wp-content\/uploads\/2023\/01\/Informe-explicabilitat.jpg","width":2560,"height":1920,"caption":"Informe sobre Explicabilitat de la Intel\u00b7lig\u00e8ncia Artificial en Salut"},{"@type":"BreadcrumbList","@id":"https:\/\/ticsalutsocial.atoom.space\/en\/noticia\/nou-informe-sobre-lexplicabilitat-en-la-intelligencia-artificial\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Inici","item":"https:\/\/ticsalutsocial.atoom.space\/en\/"},{"@type":"ListItem","position":2,"name":"New report on Explainability in Artificial Intelligence"}]},{"@type":"WebSite","@id":"https:\/\/ticsalutsocial.atoom.space\/en\/#website","url":"https:\/\/ticsalutsocial.atoom.space\/en\/","name":"Fundaci\u00f3 TIC Salut i 
Social","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ticsalutsocial.atoom.space\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/ticsalutsocial.atoom.space\/en\/wp-json\/wp\/v2\/noticia\/19139","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ticsalutsocial.atoom.space\/en\/wp-json\/wp\/v2\/noticia"}],"about":[{"href":"https:\/\/ticsalutsocial.atoom.space\/en\/wp-json\/wp\/v2\/types\/noticia"}],"author":[{"embeddable":true,"href":"https:\/\/ticsalutsocial.atoom.space\/en\/wp-json\/wp\/v2\/users\/12"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ticsalutsocial.atoom.space\/en\/wp-json\/wp\/v2\/media\/20776"}],"wp:attachment":[{"href":"https:\/\/ticsalutsocial.atoom.space\/en\/wp-json\/wp\/v2\/media?parent=19139"}],"wp:term":[{"taxonomy":"etiqueta","embeddable":true,"href":"https:\/\/ticsalutsocial.atoom.space\/en\/wp-json\/wp\/v2\/etiqueta?post=19139"},{"taxonomy":"tipus","embeddable":true,"href":"https:\/\/ticsalutsocial.atoom.space\/en\/wp-json\/wp\/v2\/tipus?post=19139"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/ticsalutsocial.atoom.space\/en\/wp-json\/wp\/v2\/topic?post=19139"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}