Sökning: "natural language processing NLP"
Visar resultat 16 - 20 av 23 avhandlingar innehållade orden natural language processing NLP.
16. Attacking and Defending the Privacy of Clinical Language Models
Abstract: State-of-the-art methods in natural language processing (NLP) increasingly rely on large pre-trained transformer models. The strength of these models stems from their large number of parameters and the enormous amounts of data used to train them. The datasets are of a scale that makes them difficult, if not impossible, to audit manually.
17. Segmenting and Tagging Text with Neural Networks
Abstract: Segmentation and tagging of text are important preprocessing steps for higher-level natural language processing tasks. In this thesis, we apply a sequence labelling framework based on neural networks to various segmentation and tagging tasks, including sentence segmentation, word segmentation, morpheme segmentation, joint word segmentation and part-of-speech tagging, and named entity transliteration.
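To make the sequence labelling framing concrete: each token in the input receives one label, and tasks such as word segmentation or part-of-speech tagging differ only in the tag set. The Python sketch below is a minimal illustration, not the thesis's own model; the BiLSTM architecture, dimensions, and BIO-style tags are illustrative assumptions.

import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    # Minimal BiLSTM sequence labeller: predicts one tag per input token.
    def __init__(self, vocab_size, num_tags, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids):                 # (batch, seq_len)
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden)                   # (batch, seq_len, num_tags)

# Toy usage: a 5-token sentence with BIO-style tags (0 = O, 1 = B, 2 = I).
model = BiLSTMTagger(vocab_size=100, num_tags=3)
tokens = torch.randint(0, 100, (1, 5))
gold = torch.tensor([[1, 2, 0, 1, 0]])
loss = nn.CrossEntropyLoss()(model(tokens).flatten(0, 1), gold.flatten())
loss.backward()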
18. Understanding Large Language Models: Towards Rigorous and Targeted Interpretability Using Probing Classifiers and Self-Rationalisation
Abstract: Large language models (LLMs) have become the basis of many natural language processing (NLP) systems thanks to their performance and easy adaptability to various tasks. However, much about their inner workings remains unknown.
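For readers unfamiliar with the probing-classifier technique named in the title: a probe is a small classifier trained on a model's frozen internal representations; if it can predict a linguistic property, that property is recoverable from those representations. The sketch below is a minimal illustration using random vectors and a synthetic property as stand-ins for real LM hidden states.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for frozen LM hidden states: in a real probing study, each row
# would be a representation extracted from one layer of the model.
rng = np.random.default_rng(0)
reps = rng.standard_normal((200, 768))
labels = (reps[:, 0] > 0).astype(int)   # synthetic "linguistic property"

# Train the probe on one split, then measure how well the property is
# recoverable from held-out representations.
probe = LogisticRegression(max_iter=1000).fit(reps[:150], labels[:150])
print("probe accuracy:", probe.score(reps[150:], labels[150:]))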
19. Faster and More Resource-Efficient Intent Classification
Abstract: Intent classification is known to be a complex problem in Natural Language Processing (NLP) research, and it is one of the stepping stones towards machines that can understand our language. Several models have recently appeared to tackle the problem, and deep learning has brought a solution within reach.
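As an illustration of the task itself: intent classification maps an utterance to one of a fixed set of intents. The sketch below shows a lightweight, resource-efficient baseline (TF-IDF features plus a linear classifier), not any model from the thesis; the tiny dataset is invented for the example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented dataset: each utterance is labelled with its intent.
utterances = ["what's the weather today", "will it rain tomorrow",
              "play some jazz", "put on my workout playlist",
              "set an alarm for 7am", "wake me up at six"]
intents = ["weather", "weather", "music", "music", "alarm", "alarm"]

# TF-IDF features + a linear classifier: cheap to train and to serve.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)
print(clf.predict(["is it going to snow", "start my morning alarm"]))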
20. Vector Representations of Idioms in Data-Driven Chatbots for Robust Assistance
Abstract: This thesis presents resources that enhance solutions to several Natural Language Processing (NLP) tasks, demonstrates that deep models learn abstractions through cross-lingual transferability, and shows how deep learning models trained on idioms can enhance open-domain conversational systems. Open-domain conversational systems face many challenges, including bland, repetitive utterances, lack of utterance diversity, lack of training data for low-resource languages, shallow world knowledge, and non-empathetic responses.
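As a minimal sketch of how vector representations of idioms might be used in a chatbot component (not the thesis's method): given an embedding per idiom, a user utterance's vector can be matched to the closest idiom by cosine similarity. Random vectors stand in for learned embeddings here.

import numpy as np

# Random vectors stand in for learned idiom embeddings; in a real system
# these would come from a trained encoder.
rng = np.random.default_rng(1)
idioms = ["break the ice", "under the weather", "hit the sack"]
idiom_vecs = rng.standard_normal((3, 8))

def closest_idiom(query_vec):
    # Cosine similarity between the query and every idiom embedding.
    sims = idiom_vecs @ query_vec / (
        np.linalg.norm(idiom_vecs, axis=1) * np.linalg.norm(query_vec))
    return idioms[int(np.argmax(sims))]

print(closest_idiom(rng.standard_normal(8)))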