Search: "large language models"
Showing results 1 - 5 of 114 dissertations containing the words large language models.
1. Understanding Large Language Models : Towards Rigorous and Targeted Interpretability Using Probing Classifiers and Self-Rationalisation
Abstract: Large language models (LLMs) have become the foundation of many natural language processing (NLP) systems due to their strong performance and easy adaptability to various tasks. However, much about their inner workings is still unknown.
2. Natural Language Processing for Low-resourced Code-switched Colloquial Languages – The Case of Algerian Language
Abstract: In this thesis, we explore to what extent deep neural networks (DNNs), trained end-to-end, can be used to perform natural language processing tasks for code-switched colloquial languages that lack both large annotated data and processing tools such as tokenisers and morpho-syntactic and semantic parsers. We opt for an end-to-end learning approach because this kind of data is hard to control due to its high orthographic and linguistic variability.
3. The Virtual Language Teacher : Models and applications for language learning using embodied conversational agents
Abstract: This thesis presents a framework for computer-assisted language learning using a virtual language teacher. It is an attempt to create not only a new type of language-learning software but also a server-based application that collects large amounts of speech material for future research purposes.
4. The Search for Syntax : Investigating the Syntactic Knowledge of Neural Language Models Through the Lens of Dependency Parsing
Abstract: Syntax — the study of the hierarchical structure of language — has long featured as a prominent research topic in the field of natural language processing (NLP). Traditionally, its role in NLP was confined to developing parsers: supervised algorithms tasked with predicting the structure of utterances (often for use in downstream applications).
5. Representation learning for natural language
Abstract: Artificial neural networks have obtained astonishing results in a diverse range of tasks. One of the reasons for this success is their ability to learn the whole task at once (end-to-end learning), including the representations for the data.