Keyword: «transformers»

Dmitriev N. N. MACHINE TRANSLATION: A SURVEY // Научно-методический электронный журнал «Концепт». – 2026. – . – URL: http://e-koncept.ru/2026/0.htm
Machine Translation (MT) has evolved from rigid rule-based systems to statistical methods and, finally, to neural architectures. While supervised Neural Machine Translation (NMT) achieves high performance, it depends heavily on large parallel corpora. This survey reviews the technical evolution of state-of-the-art (SOTA) MT approaches and analyzes Unsupervised Machine Translation (UMT). We examine how modern models learn translation mappings using only monolingual data, decoupling performance from the availability of bilingual datasets.
Baisheva M. G. A REVIEW OF SOFTWARE SOLUTIONS FOR NATURAL LANGUAGE PROCESSING IN LOW-RESOURCE LANGUAGES: A COMPARATIVE ANALYSIS USING THE EXAMPLES OF THE YAKUT AND BURYAT LANGUAGES // Научно-методический электронный журнал «Концепт». – 2026. – . – URL: http://e-koncept.ru/2026/0.htm
This article discusses modern software solutions for natural language processing (NLP) in low-resource languages, where text data and tools are in short supply. The relevance of the topic stems from the need to integrate these languages into the digital space in order to preserve cultural diversity. Machine learning models, including transformer-based models and machine translation systems, are analyzed using the examples of the Yakut and Buryat languages. The results show significant differences in the effectiveness of the methods depending on the amount of available data and the specific linguistic situation. The main limitation of the study is its focus on only two languages, which narrows the generality of the conclusions. Nevertheless, this work contributes to the development of methods for further training models on the regional languages of Russia and underscores the importance of transfer learning from related languages for overcoming data scarcity.