OAR@UM Community:
/library/oar/handle/123456789/5430
2025-12-26T03:34:41Z

Signing with your hands full
/library/oar/handle/123456789/138761
Title: Signing with your hands full
Abstract: Weak drop, or the optional deletion of the weak hand in two-handed signs, is a natural
phenomenon occurring in a number of signed languages (Paligot et al., 2016). In everyday
conversations, users of a signed language have both their dominant hand and their weak
hand at their disposal. However, some situations, such as holding a drink, leave only one
hand free. In these cases, weak drop is not an option but a necessity. The present study focuses
on forced weak drop using elicited narratives, and the adaptations observed when one of
the hands is otherwise occupied. Each of the seven participants in this study was filmed
narrating a story from a children’s picture book, first using both hands and then using one
hand whilst the other held a cup of water. The resulting fourteen renditions were analysed
and compared to other studies in the literature. Results show that many of the adaptations
observed in the Maltese Sign Language (LSM) are similar to those found in other studies
and that these adaptations are related to the modality rather than the language. It also
appears that iconicity may have an effect on the adaptation of two-handed LSM signs into
one-handed variants, especially when weak drop is otherwise inhibited.
Description: M.A.(Melit.)
2025-01-01T00:00:00Z

Metaphorical minds : an investigation of large language models’ ability to adequately generate and process metaphors
/library/oar/handle/123456789/138760
Title: Metaphorical minds : an investigation of large language models’ ability to adequately generate and process metaphors
Abstract: Creativity in language is a qualitative feature that is claimed to be unique to humans
(Chomsky, 2006). Current Large Language Models (LLMs), such as ChatGPT, appear to have
mastered the use of non-literal language, such as metaphors, and have therefore
supposedly crossed the threshold between machines and humans (Mei et al., 2024) when it
comes to mastering linguistic creativity. However, it is not clear how well LLMs can
understand and produce novel non-literal language compared to humans, which is what this
dissertation aims to investigate. This dissertation explores the role of creativity in human
language, covering classical and cognitive approaches in metaphor research, with a focus on
novel metaphors. It reviews key literature in both linguistic theory and natural language
processing, and presents a qualitative analysis of human- and machine-produced paraphrases
to showcase metaphor interpretation. Broader implications for scientific research are
discussed, particularly in comparing human and machine capacities for metaphorical
understanding.
Description: B.A. (Hons)(Melit.)
2025-01-01T00:00:00Z

A phonetic and phonological analysis of vowels in the dialect of Żebbuġ (Malta)
/library/oar/handle/123456789/138546
Title: A phonetic and phonological analysis of vowels in the dialect of Żebbuġ (Malta)
Abstract: This study aims to describe the vowels of the Maltese dialect of Ħaż-Żebbuġ,
otherwise known as Żebbuġi. This chapter starts by providing background information on the
town of Ħaż-Żebbuġ, specifically by focusing on its population, history, and the
sociolinguistic situation. The discussion of the sociolinguistic context looks into the social
divisions in the town and how these affect the locals’ use of language (Section 1.1). A section
which outlines the aims of the study (Section 1.2) follows, after which the transcription
conventions used in this dissertation are laid out (Section 1.3). The chapter ends with an
overview of this dissertation, which outlines the main contents of each chapter (Section 1.4).
Description: B.A. (Hons)(Melit.)
2025-01-01T00:00:00Z

BERT for sentiment analysis of Japanese Twitter
/library/oar/handle/123456789/130108
Title: BERT for sentiment analysis of Japanese Twitter
Abstract: This publication introduces novel, open-source resources for sentiment analysis on Japanese Twitter. BERT for Japanese Twitter is a pre-trained model that is highly competent in the target domain and adaptable to a variety of tasks. Japanese Twitter Sentiment 1k (JTS1k) is a compact sentiment analysis dataset optimized for balance and reliability. This combination of pre-trained model and dataset was used to fine-tune a sentiment analysis model that broadly applies to Japanese social networking services (SNS): BERT for Japanese SNS Sentiment. The primary focus of this project is domain adaptation. Using an established Japanese BERT model as a foundation, domain adaptation was achieved by optimizing the vocabulary and continuing pre-training on a large Twitter corpus. Similar methodology was used to develop Twitter Multilingual RoBERTa (XLM-T) (Barbieri et al., 2022), which is the state-of-the-art multilingual Twitter model. By using a monolingual approach, this study developed a more efficient model that outperformed XLM-T in the target language. This project explored fundamental elements of corpus construction, corpus refinement, dataset annotation, preprocessing, pre-training, fine-tuning, and benchmarking. It concludes with a demonstration that the sentiment model is valid, useful, and sensitive to changes in public sentiment that correlate with real-world events.
Description: M.A.(Melit.)
2024-01-01T00:00:00Z
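
The BERT record above describes a two-step recipe: domain adaptation of an established Japanese BERT model via continued pre-training on a Twitter corpus, followed by fine-tuning for sentiment classification. The sketch below illustrates that general recipe with Hugging Face Transformers; the base checkpoint, data files, label scheme, and output names are placeholders assumed for illustration, not the resources released with the thesis.

```python
# Illustrative sketch only: continued masked-language-model pre-training on
# in-domain text, then sentiment fine-tuning. File and model names are assumed.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

BASE = "cl-tohoku/bert-base-japanese"  # an established Japanese BERT, used here as an example base

# Step 1: domain adaptation via continued pre-training on a Twitter corpus.
tokenizer = AutoTokenizer.from_pretrained(BASE)
mlm_model = AutoModelForMaskedLM.from_pretrained(BASE)

tweets = load_dataset("text", data_files={"train": "tweets.txt"})["train"]  # hypothetical corpus file
tweets = tweets.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="bert-japanese-twitter", num_train_epochs=1),
    train_dataset=tweets,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
).train()
mlm_model.save_pretrained("bert-japanese-twitter")
tokenizer.save_pretrained("bert-japanese-twitter")

# Step 2: fine-tune the adapted encoder on a labelled sentiment dataset
# (assumed CSV with "text" and "label" columns, e.g. negative / neutral / positive).
clf = AutoModelForSequenceClassification.from_pretrained("bert-japanese-twitter", num_labels=3)
labelled = load_dataset("csv", data_files={"train": "sentiment.csv"})["train"]
labelled = labelled.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

Trainer(
    model=clf,
    args=TrainingArguments(output_dir="bert-japanese-sns-sentiment", num_train_epochs=3),
    train_dataset=labelled,
).train()
```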