{"id":12341,"date":"2024-03-22T09:59:02","date_gmt":"2024-03-22T06:59:02","guid":{"rendered":"https:\/\/umram.bilkent.edu.tr\/?p=12341"},"modified":"2024-03-22T09:59:02","modified_gmt":"2024-03-22T06:59:02","slug":"graph-receptive-transformer-encoder-for-text-classification-2","status":"publish","type":"post","link":"https:\/\/umram.bilkent.edu.tr\/index.php\/tr\/2024\/03\/22\/graph-receptive-transformer-encoder-for-text-classification-2\/","title":{"rendered":"Graph Receptive Transformer Encoder for Text Classification"},"content":{"rendered":"<p>We are happy to share our latest paper, &#8220;Graph Receptive Transformer Encoder (GRTE) for Text Classification,&#8221; published in IEEE Transactions on Signal and Information Processing over Networks!<\/p>\n<p>Our new approach combines graph neural networks (GNNs) with large-scale pre-trained models to address the limitations of transformers&#8217; attention mechanisms for text classification. 
By representing texts as graphs, GRTE retrieves global and contextual information, delivering significant performance improvements and computational savings of up to ~100x compared to state-of-the-art models.<\/p>\n<p>&nbsp;<\/p>\n<p>Check out the paper for more details!<\/p>\n<p>#NLP #TextClassification #Transformer #GNN #GRTE #TSIPN #IEEE<\/p>\n<p>&nbsp;<\/p>\n<p>Paper: <a href=\"https:\/\/ieeexplore.ieee.org\/document\/10477516\">https:\/\/ieeexplore.ieee.org\/document\/10477516<\/a><\/p>\n<p>Code: <a href=\"https:\/\/github.com\/koc-lab\/grte\">https:\/\/github.com\/koc-lab\/grte<\/a><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Abstract:<\/strong><\/p>\n<p>By employing attention mechanisms, transformers have made great improvements in nearly all NLP tasks, including text classification. However, the context of the transformer\u2019s attention mechanism is limited to single sequences, and its fine-tuning stage can utilize only inductive learning. Focusing on broader contexts by representing texts as graphs, previous works have generalized transformer models to graph domains to employ attention mechanisms beyond single sequences. However, these approaches either require exhaustive pre-training stages, learn only transductively, or learn inductively without utilizing pre-trained models. To address these problems simultaneously, we propose the Graph Receptive Transformer Encoder (GRTE), which combines graph neural networks (GNNs) with large-scale pre-trained models for text classification in both inductive and transductive fashions. By constructing heterogeneous and homogeneous graphs over given corpora and not requiring a pre-training stage, GRTE can utilize information from both large-scale pre-trained models and graph-structured relations. Our proposed method retrieves global and contextual information in documents and generates word embeddings as a by-product of inductive inference. 
We compared the proposed GRTE with a wide range of baseline models through comprehensive experiments. Compared to the state-of-the-art, we demonstrated that GRTE improves model performance and offers computational savings of up to ~100x.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We are happy to share our latest paper, &#8220;Graph Receptive Transformer Encoder (GRTE) for Text Classification,&#8221; published in IEEE Transactions on Signal and Information Processing over Networks! Our new approach combines graph neural networks (GNNs) with large-scale pre-trained models to address the limitations of transformers&#8217; attention mechanisms for text classification. By representing texts as graphs, GRTE retrieves global [&hellip;]<\/p>\n","protected":false},"author":7,"featured_media":12338,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[445],"tags":[],"_links":{"self":[{"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/posts\/12341"}],"collection":[{"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/comments?post=12341"}],"version-history":[{"count":1,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/posts\/12341\/revisions"}],"predecessor-version":[{"id":12342,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/posts\/12341\/revisions\/12342"}],"wp:featuredmedia":[{"embeddable"
:true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/media\/12338"}],"wp:attachment":[{"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/media?parent=12341"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/categories?post=12341"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/umram.bilkent.edu.tr\/index.php\/wp-json\/wp\/v2\/tags?post=12341"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}