OAR@UM Collection: /library/oar/handle/123456789/8369 2025-12-21T13:08:47Z

Title: Multimodal data fusion for enhanced smart contract reputability analysis
Authors: Malik, Cyrus; Ellul, Joshua; Bajada, Josef
Handle: /library/oar/handle/123456789/141943
Date: 2025-06-01
Abstract: The evaluation of smart contract reputability is essential to foster trust in decentralized ecosystems. However, existing methods that rely solely on static code analysis or transactional data offer limited insight into evolving trustworthiness. We propose a multimodal data fusion framework that integrates static code features with transactional data to enhance reputability prediction. Our framework initially focuses on static code analysis, using GAN-augmented opcode embeddings to address class imbalance and achieving 97.67% accuracy and a recall of 0.942 in detecting illicit contracts, surpassing traditional oversampling methods. This forms the crux of a reputability-centric fusion strategy in which combining static and transactional data improves recall by 7.25% over single-source models, demonstrating robust performance across validation sets. By providing a holistic view of smart contract behaviour, our approach enhances the model's ability to assess reputability, identify fraudulent activities, and predict anomalous patterns. These capabilities contribute to more accurate reputability assessments, proactive risk mitigation, and enhanced blockchain security.
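The fusion strategy the abstract describes — combining static code features with transactional features into a single prediction — can be illustrated with a minimal late-fusion sketch. This is a hypothetical illustration, not the authors' implementation: the feature names, the weights, and the logistic scorer are all assumptions standing in for the paper's trained model.

```python
import math

# Hypothetical late-fusion sketch: concatenate static opcode features with
# transactional features before scoring. Feature names, weights, and the
# logistic scorer are illustrative assumptions, not the paper's method.

def fuse_features(static_feats: dict, tx_feats: dict) -> list:
    """Concatenate the two modalities into one feature vector.

    Keys are sorted so the ordering is deterministic."""
    return ([static_feats[k] for k in sorted(static_feats)]
            + [tx_feats[k] for k in sorted(tx_feats)])

def reputability_score(fused: list, weights: list, bias: float = 0.0) -> float:
    """Toy linear model standing in for a trained classifier."""
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # squash to a (0, 1) reputability score

static_feats = {"emb_0": 0.8, "emb_1": -0.2}    # e.g. opcode-embedding dims
tx_feats = {"avg_value": 0.1, "tx_count": 0.5}  # e.g. transaction statistics
fused = fuse_features(static_feats, tx_feats)
score = reputability_score(fused, weights=[0.9, -0.3, 0.4, 0.2])
```

The point of the sketch is the design choice itself: because both modalities end up in one vector, the downstream model can weigh static evidence against behavioural evidence, which is what allows a fused model to recover illicit contracts that either single source would miss.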
Title: MRTMD : a multi-resolution dataset for evaluating object detection in traffic monitoring systems
Authors: Bugeja, Mark; Bartolo, Matthias; Montebello, Matthew; Seychell, Dylan
Handle: /library/oar/handle/123456789/141933
Date: 2025-01-01
Abstract: Traffic monitoring reduces congestion, improves safety, and supports environmental sustainability. Real-time flow tracking, anomaly detection, and efficient management are key. Convolutional Neural Networks (CNNs) have become integral due to their compact size and easy deployment. However, their effectiveness depends heavily on the quality of the input data, especially image resolution. With high-resolution cameras, especially 4K, balancing image quality, detection accuracy, and system efficiency is critical. We propose the Multi-Resolution Traffic Monitoring Dataset (MRTMD), which captures transport scenes at resolutions ranging from 2160p to 360p. This dataset serves as a benchmark for standard object detection models, enabling the development of more efficient and cost-effective traffic monitoring solutions. MRTMD will be freely available on GitHub, offering a valuable resource for researchers and practitioners. We evaluate leading object detection models, namely YOLOv9, YOLOv8, YOLOv7, Faster R-CNN, FCOS, SSD, and RT-DETR, across varied resolutions. Our analysis focuses on mean Average Precision (mAP), recall, and processing time. We also assess the accuracy of Number Plate Recognition (NPR) for tasks that require fine-grained detail extraction. Our findings show that detection performance typically varies within ±0.01 to ±0.03 in mAP and recall across resolutions, suggesting that higher resolutions are not always advantageous. However, they remain crucial for tasks like NPR.
The multi-resolution dataset enables a comprehensive evaluation of the trade-off between image quality and task performance. Ultimately, our analysis highlights the importance of resolution selection in large-scale deployments, informing system designers and policymakers. This dataset is a vital tool for balancing performance, cost, and practical constraints in real-world traffic monitoring.

Title: Advancing experiential learning through generative AI-powered virtual reality
Authors: Borg, Gabriel; Azzopardi, Keith; Cini, Karl; Cardona, Luke; Caruana, Richard; Camilleri, Vanessa; Seychell, Dylan; Montebello, Matthew
Handle: /library/oar/handle/123456789/141907
Date: 2025-01-01
Abstract: The accelerating complexity of professional practice requires higher education institutions to adopt innovative pedagogical approaches that bridge knowledge acquisition and authentic skills application. This paper presents the WAVE project, an educational innovation that integrates Generative Artificial Intelligence (AI) with Virtual Reality (VR) to create adaptive, immersive training environments for water-rescue education. Designed as a proof of concept, WAVE addresses key limitations of traditional training, including limited scenario variability, resource constraints, and safety risks, by leveraging Generative AI to dynamically construct diverse, context-rich emergency situations. Central to WAVE's design is a generative scenario engine that produces highly realistic virtual environments and variable rescue challenges, adapting to learner profiles, competencies, and progression.
The system captures real-time performance data, such as decision-making, response time, and physiological indicators, and uses these inputs to personalise the learning pathway, ensuring that each training session evolves according to individual needs and skill development goals. This continuous adaptation supports experiential learning by exposing trainees to an extensive range of lifelike scenarios that would be impractical or unsafe to reproduce physically. The paper outlines the instructional design framework guiding the development of WAVE, with particular attention to how Generative AI enhances experiential learning, reflective practice, and mastery of critical decision-making. Preliminary pilot studies involving water-rescue trainees demonstrate promising outcomes, including increased situational awareness, improved procedural accuracy, and heightened learner engagement. Furthermore, participants report strong perceptions of realism, relevance, and motivation, highlighting the system's potential to foster deeper learning. Beyond its immediate application to water-rescue training, WAVE offers broader implications for higher education. The modular architecture and adaptive capabilities of Generative AI-powered VR can be extended to various disciplines requiring complex skill acquisition, including healthcare, engineering, crisis management, and teacher education. The paper concludes by discussing scalability, ethical considerations in AI-generated training content, and the essential role of human oversight to ensure pedagogical soundness and learner well-being. This contribution aims to stimulate dialogue on how Generative AI and VR can reshape experiential learning in higher education, offering scalable, safe, and personalised alternatives to traditional skills training.
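The adaptive-pathway idea in the WAVE abstract — captured performance data steering the difficulty of the next scenario — can be sketched as a simple rule-based selector. This is a hypothetical illustration only: the metric names (decision accuracy, response time), the thresholds, and the 1-5 difficulty scale are assumptions, since the text does not describe the engine's logic at this level.

```python
# Hypothetical sketch of performance-driven scenario adaptation.
# Metric names, thresholds, and the 1-5 difficulty scale are assumptions,
# not the WAVE project's actual rules.

def next_difficulty(current: int, decision_accuracy: float,
                    response_time_s: float) -> int:
    """Raise difficulty after strong performance, lower it after weak
    performance, clamped to an assumed 1-5 scale."""
    if decision_accuracy >= 0.9 and response_time_s <= 10.0:
        current += 1          # trainee is comfortable: increase challenge
    elif decision_accuracy < 0.6 or response_time_s > 30.0:
        current -= 1          # trainee is struggling: ease off
    return max(1, min(5, current))

# A confident, fast session moves the trainee up one level.
level = next_difficulty(2, decision_accuracy=0.95, response_time_s=7.5)
```

A production engine would presumably replace the hand-set thresholds with a learner model updated across sessions, but the control loop — measure, compare, adjust the next scenario — is the same.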
Title: Exploring the educational frontier : a deep dive into VR and AI-enhanced learning environments
Authors: Saini, Akash Kumar; Montebello, Matthew
Handle: /library/oar/handle/123456789/141902
Date: 2024-01-01
Abstract: Virtual Reality (VR) and Artificial Intelligence (AI) have the potential to revolutionize the way we approach education. VR technology creates immersive, computer-generated environments that simulate real-world scenarios, allowing students to engage with content in a more interactive and engaging way. AI, in turn, can personalize instruction, provide real-time feedback, and adapt to the learning needs of individual students through learning analytics. This chapter focuses on exploiting the potential of VR and AI while highlighting the benefits, challenges, and ethical considerations of such a medium, as well as accessibility and inclusivity concerns.