Department of Intelligent Computer Systems

Permanent collection link: https://repository.kpi.kharkov.ua/handle/KhPI-Press/2423

Official department website: http://web.kpi.kharkov.ua/iks

The Department of Intelligent Computer Systems was founded on 12 February 2007 on the basis of the Applied Linguistics degree programme.

In 2009, the Research Centre for Intelligent Systems and Computational Linguistics was established at the department jointly with the Ukrainian Language and Information Fund of the NAS of Ukraine.

The department is part of the Educational and Scientific Institute of Social and Humanitarian Technologies of the National Technical University "Kharkiv Polytechnic Institute".

The department's teaching and research staff includes 2 Doctors of Technical Sciences, 5 Candidates of Philological Sciences, 4 Candidates of Technical Sciences and 1 Candidate of Philosophical Sciences; 2 staff members hold the academic title of Professor and 3 that of Associate Professor.

Search results

Now showing 1 - 7 of 7
  • Open Information Extraction as Additional Source for Kazakh Ontology Generation
    (2020) Khairova, N. F.; Petrasova, S. V.; Mamyrbayev, Orken; Mukhsina, Kuralay
    Nowadays, structured information obtained from unstructured texts and Web content can be applied as an additional source of knowledge for creating ontologies. In order to extract information from a text and represent it in the RDF-triplet format, we suggest using the Open Information Extraction model. We then consider the adaptation of the model to fact extraction from unstructured texts in the Kazakh language. In our approach, we identify lexical units that name the participants of the action (the Subject and Object) and the semantic relations between them, based on the characteristics of words in a sentence. The model derives the semantic functions of the action participants via logical-linguistic equations that express the relations between the grammatical and semantic characteristics of the words in a Kazakh sentence. Using the tag names and certain syntactic characteristics of words in Kazakh sentences as the values of the predicate variables in the corresponding equations allows us to extract the Subjects, Objects and Predicates of facts from Web-content texts. The experimental dataset includes texts extracted from Kazakh bilingual news websites. The experiment shows that we can achieve a fact-extraction precision above 71% for the Kazakh corpus.
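The abstract does not reproduce the logical-linguistic equations themselves; as a rough illustration of the Subject-Predicate-Object idea only, a naive rule over POS-tagged tokens might look like the sketch below (all names and the toy rule are hypothetical, not the authors' model):

```python
# Illustrative sketch only: a naive Subject-Predicate-Object extractor over
# POS-tagged tokens. The paper's model uses logical-linguistic equations over
# Kazakh grammatical features; this toy rule just takes the first noun before
# the verb as Subject and the first noun after it as Object.
def extract_spo(tagged):
    subject = predicate = obj = None
    for word, tag in tagged:
        if tag.startswith("V") and predicate is None:
            predicate = word
        elif tag.startswith("N"):
            if predicate is None and subject is None:
                subject = word
            elif predicate is not None and obj is None:
                obj = word
    if subject and predicate and obj:
        return (subject, predicate, obj)
    return None

# A toy English example (the paper itself targets Kazakh):
tokens = [("Police", "NN"), ("arrested", "VBD"), ("the", "DT"), ("suspect", "NN")]
print(extract_spo(tokens))  # ('Police', 'arrested', 'suspect')
```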
  • Detecting Collocations Similarity via Logical-Linguistic Model
    (Association for Computational Linguistics, USA, 2019) Khairova, N. F.; Petrasova, S. V.; Mamyrbayev, Orken; Mukhsina, Kuralay
    Semantic similarity between collocations, along with word similarity, is one of the main issues of NLP. In particular, it can be addressed to facilitate automatic thesaurus generation. In the paper, we consider a logical-linguistic model that allows defining the relation of semantic similarity of collocations via logical-algebraic equations. We provide the model for English, Ukrainian and Russian text corpora. The implementation for each language differs slightly in the equations of the finite predicate algebra and in the linguistic resources used. As a dataset for our experiment, we use 5801 sentence pairs from the Microsoft Research Paraphrase Corpus for English and more than 1000 scientific-paper texts for Russian and Ukrainian.
  • The Influence of Various Text Characteristics on the Readability and Content Informativeness
    (2019) Khairova, N. F.; Kolesnyk, Anastasiia; Mamyrbayev, Orken; Mukhsina, Kuralay
    Currently, businesses increasingly use various external big data sources to extract and integrate information into their own enterprise information systems in order to make correct economic decisions, understand customer needs and predict risks. A necessary condition for obtaining useful knowledge from big data is the analysis of high-quality data, including high-quality textual data. In this study, we focus on how readability and certain features of texts written for a global audience influence text quality assessment. In order to estimate the influence of different linguistic and statistical factors on text readability, we reviewed five text corpora. Two of them contain texts from Wikipedia, the third contains texts from Simple Wikipedia, and the last two include scientific and educational texts. We show which linguistic and statistical features of a text have the greatest influence on its quality for business corporations. Finally, we propose some directions towards automatically predicting the readability of texts on the Web.
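The abstract does not name the specific readability measures used; as one standard example of the statistical kind it discusses, the classic Flesch Reading Ease score can be sketched as below (the vowel-group syllable counter is a rough heuristic, not the study's method):

```python
# Illustrative sketch: the classic Flesch Reading Ease readability score.
# Syllables are approximated by counting vowel groups, which is only a
# rough heuristic for English words.
import re

def count_syllables(word):
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))

# Simple prose scores high; dense multisyllabic prose scores much lower.
easy = flesch_reading_ease("The cat sat on the mat. It was warm.")
hard = flesch_reading_ease("Comprehensibility assessments necessitate multisyllabic vocabulary.")
print(easy > hard)  # True
```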
  • The aligned Kazakh-Russian parallel corpus focused on the criminal theme
    (2019) Khairova, N. F.; Kolesnyk, Anastasiia; Mamyrbayev, Orken; Mukhsina, Kuralay
    Nowadays, the development of high-quality aligned parallel text corpora is one of the most relevant and advanced directions of modern linguistics. Special emphasis is placed on creating parallel multilingual corpora for low-resource languages such as Kazakh. In this study, we explored texts from four Kazakh bilingual news websites and, on their basis, created a parallel Kazakh-Russian corpus of texts focused on the criminal theme. In order to align the corpus, we used a set of lexical correspondences and the POS tags of both languages. 60% of the corpus sentences are aligned correctly automatically. Finally, we analyzed the factors affecting the percentage of errors.
  • Automatic Extraction of Synonymous Collocation Pairs from a Text Corpus
    (Polskie Towarzystwo Informatyczne, Poland, 2018) Khairova, N. F.; Petrasova, S. V.; Lewoniewski, Włodzimierz; Mamyrbayev, Orken; Mukhsina, Kuralay
    Automatic extraction of synonymous collocation pairs from text corpora is a challenging NLP task. In order to search for collocations of similar meaning in English texts, we use logical-algebraic equations. These equations combine the grammatical and semantic characteristics of words in collocations of the substantive, attributive and verbal types. With the Stanford POS tagger and the Stanford Universal Dependencies parser, we identify the grammatical characteristics of words. We exploit WordNet synsets to select synonymous words for collocations. The potential synonymous word combinations found are checked for compliance with the grammatical and semantic characteristics encoded in the proposed logical-linguistic equations. Our dataset includes more than half a million Wikipedia articles from several portals. The experiment shows that the more frequently synonymous collocations occur in texts, the more closely related the topics of those texts may be. The precision of the synonymous-collocation search in our experiment is close to that of comparable studies.
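To illustrate the core matching idea (not the authors' equations), two two-word collocations can be treated as synonymous when each position matches via a synonym set. In the sketch below, a tiny hand-made dictionary stands in for WordNet synsets; the paper uses real WordNet synsets plus logical-linguistic equations over Stanford POS/dependency output:

```python
# Illustrative sketch: a toy synonym map standing in for WordNet synsets.
SYNSETS = {
    "big": {"big", "large", "great"},
    "large": {"big", "large", "great"},
    "profit": {"profit", "gain", "return"},
    "gain": {"profit", "gain", "return"},
}

def are_synonyms(w1, w2):
    w1, w2 = w1.lower(), w2.lower()
    return w1 == w2 or w2 in SYNSETS.get(w1, set())

def synonymous_collocations(colloc_a, colloc_b):
    # colloc_* are (modifier, head) pairs, e.g. ("big", "profit");
    # both positions must match via the synonym sets.
    return (are_synonyms(colloc_a[0], colloc_b[0])
            and are_synonyms(colloc_a[1], colloc_b[1]))

print(synonymous_collocations(("big", "profit"), ("large", "gain")))  # True
```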
  • Logical-linguistic model for multilingual Open Information Extraction
    (2020) Khairova, N. F.; Mamyrbayev, Orken; Mukhsina, Kuralay; Kolesnyk, Anastasiia
    Open Information Extraction (OIE) is a modern strategy for extracting fact triplets from Web-document collections. However, most current OIE approaches are based on NLP techniques such as POS tagging and dependency parsing, for which tools are not available in all languages. In this paper, we suggest a logical-linguistic model whose basic mathematical means are logical-algebraic equations of finite predicate algebra. These equations express the semantic role of a participant in a fact triplet (Subject-Predicate-Object) through the relations between the grammatical characteristics of words in a sentence. The proposed model extracts an unlimited, domain-independent number of facts from sentences in different languages. Our model allows extracting facts from unstructured texts without requiring a pre-specified vocabulary, by identifying relations in phrases and their associated arguments in arbitrary sentences of English, Kazakh and Russian. We evaluate our approach on corpora in the three languages built from English and Kazakh bilingual news websites. We achieve a fact-extraction precision above 87% for the English corpus, above 82% for the Russian corpus and 71% for the Kazakh corpus.
  • Similar Text Fragments Extraction for Identifying Common Wikipedia Communities
    (MDPI AG, Switzerland, 2018) Petrasova, S. V.; Khairova, N. F.; Lewoniewski, Włodzimierz; Mamyrbayev, Orken; Mukhsina, Kuralay
    Extraction of similar text fragments from weakly formalized data is a task of natural language processing and intelligent data analysis, used to solve the problem of automatically identifying connected knowledge fields. In order to search for such common communities in Wikipedia, we propose using a logical-algebraic model for similar collocation extraction as an additional stage. With the Stanford Part-Of-Speech tagger and the Stanford Universal Dependencies parser, we identify the grammatical characteristics of collocation words. With WordNet synsets, we choose their synonyms. Our dataset includes Wikipedia articles from different portals and projects. The experimental results show the frequencies of synonymous text fragments in Wikipedia articles that form common information spaces. The number of highly frequent synonymous collocations can serve as an indication of key common up-to-date Wikipedia communities.