{"id":10909,"date":"2020-06-30T10:25:24","date_gmt":"2020-06-30T07:25:24","guid":{"rendered":"https:\/\/umram.bilkent.edu.tr\/?p=10909"},"modified":"2020-06-30T14:39:25","modified_gmt":"2020-06-30T11:39:25","slug":"imparting-interpretability-to-word-embeddings-while-preserving-semantic-structure","status":"publish","type":"post","link":"https:\/\/umram.bilkent.edu.tr\/index.php\/2020\/06\/30\/imparting-interpretability-to-word-embeddings-while-preserving-semantic-structure\/","title":{"rendered":"Imparting interpretability to word embeddings while preserving semantic structure"},"content":{"rendered":"<section class=\"kc-elm kc-css-103047 kc_row\"><div class=\"kc-row-container  kc-container\"><div class=\"kc-wrap-columns\"><div class=\"kc-elm kc-css-926667 kc_column kc_col-sm-12\"><div class=\"kc-col-container\"><div class=\"kc-elm kc-css-510185 kc_text_block\"><p>Work by Koc Lab on natural language processing has resulted in an <a href=\"https:\/\/www.cambridge.org\/core\/journals\/natural-language-engineering\/article\/imparting-interpretability-to-word-embeddings-while-preserving-semantic-structure\/D2463D4AC2456F5E988BB06869480BCB\">article<\/a> entitled \u201cImparting interpretability to word embeddings while preserving semantic structure,\u201d published in Natural Language Engineering by Cambridge University Press. The authors of the article are <a href=\"https:\/\/umram.bilkent.edu.tr\/index.php\/teams\/lutfi-kerem-senel\/\">L\u00fctfi Kerem \u015eenel<\/a>, \u0130hsan Utlu, <a href=\"https:\/\/umram.bilkent.edu.tr\/index.php\/teams\/furkan-sahinuc\/\">Furkan \u015eahinu\u00e7<\/a>, Haldun \u00d6zakta\u015f, and <a href=\"https:\/\/umram.bilkent.edu.tr\/index.php\/teams\/aykut-koc\/\">Aykut Ko\u00e7<\/a>.<\/p>\n<p>Word embeddings are crucial tools in many NLP tasks. However, their black-box nature prevents them from being explainable and interpretable. 
In this study, for the first time, word embeddings are made interpretable by adding an external objective function to the conventional GloVe algorithm. Moreover, this joint learning of interpretable word embeddings does not distort their underlying semantic structure.<\/p>\n<p>The increase in the interpretability of the word embeddings is demonstrated by manual evaluations with human subjects as well as by qualitative and quantitative tests. The results show that the interpretability-imparted word embeddings outperform those of similar studies in the literature.<\/p>\n<p>The study contributes to the development of explainable AI algorithms in the NLP field.<\/p>\n<p><span style=\"font-size: 14pt;\"><strong>Abstract<\/strong><\/span><\/p>\n<p>As a ubiquitous method in natural language processing, word embeddings are extensively employed to map semantic properties of words into a dense vector representation. They capture semantic and syntactic relations among words, but the vectors corresponding to the words are only meaningful relative to each other. Neither the vector nor its dimensions have any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words that are semantically related to a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism mostly unaffected. In other words, we align words that are already determined to be related, along predefined concepts. Therefore, we impart interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, which in this paper is chosen as Roget\u2019s Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the thesaurus and extends to other related words as well. 
We quantify the extent of interpretability and assignment of meaning from our experimental results. Manual human evaluation results are also presented to further verify that the proposed method increases interpretability. We also demonstrate the preservation of semantic coherence of the resulting vector space using word-analogy\/word-similarity tests and a downstream task. These tests show that the interpretability-imparted word embeddings obtained by the proposed framework do not sacrifice performance in common benchmark tests.<\/p>\n<\/div><\/div><\/div><\/div><\/div><\/section>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":7,"featured_media":10914,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[256],"tags":[],"_links":{"self":[{"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/posts\/10909"}],"collection":[{"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/comments?post=10909"}],"version-history":[{"count":8,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/posts\/10909\/revisions"}],"predecessor-version":[{"id":10921,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/posts\/10909\/revisions\/10921"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/media\/10914"}],"wp:attachment":[{"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/media?parent=10909"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/categories?post=10909"},{"taxono
my":"post_tag","embeddable":true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/tags?post=10909"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}