OAR@UM Collection: /library/oar/handle/123456789/8369

Title: Multimodal data fusion for enhanced smart contract reputability analysis
Authors: Malik, Cyrus; Ellul, Joshua; Bajada, Josef
Abstract: The evaluation of smart contract reputability is
essential to foster trust in decentralized ecosystems. However,
existing methods that rely solely on static code analysis
or transactional data offer limited insight into evolving
trustworthiness. We propose a multimodal data fusion framework
that integrates static code features with transactional data
to enhance reputability prediction. Our framework initially
focuses on static code analysis, utilizing GAN-augmented opcode
embeddings to address class imbalance, achieving 97.67%
accuracy and a recall of 0.942 in detecting illicit contracts,
surpassing traditional oversampling methods. This forms the
crux of a reputability-centric fusion strategy, where combining
static and transactional data improves recall by 7.25% over
single-source models, demonstrating robust performance across
validation sets. By providing a holistic view of smart contract
behaviour, our approach enhances the model’s ability to
assess reputability, identify fraudulent activities, and predict
anomalous patterns. These capabilities contribute to more
accurate reputability assessments, proactive risk mitigation, and
enhanced blockchain security.
Date: 2025-06-01
URI: /library/oar/handle/123456789/141943
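
The abstract does not specify the fusion architecture. As a minimal sketch, assuming feature-level (early) fusion of pooled opcode embeddings with transactional features and an off-the-shelf classifier — the dimensions, feature names, and classifier below are illustrative, and the GAN-based augmentation of minority-class embeddings is omitted:

    # Illustrative early-fusion sketch; shapes, features, and classifier are assumed.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(0)
    n = 1000
    opcode_emb = rng.normal(size=(n, 64))  # static modality: pooled opcode embeddings (assumed 64-d)
    tx_feats = rng.normal(size=(n, 8))     # transactional modality: e.g. tx counts, value flows (assumed 8-d)
    y = rng.integers(0, 2, size=n)         # 1 = illicit / non-reputable, 0 = reputable (synthetic labels)

    # Feature-level fusion: concatenate both modalities into one vector per contract.
    X = np.concatenate([opcode_emb, tx_feats], axis=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("recall:", recall_score(y_te, clf.predict(X_te)))

With real data, the recall printed here is the single-source baseline the paper reports improving on by fusing both modalities.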

Title: MRTMD : a multi-resolution dataset for evaluating object detection in traffic monitoring systems
Authors: Bugeja, Mark; Bartolo, Matthias; Montebello, Matthew; Seychell, Dylan
Abstract: Traffic monitoring reduces congestion, improves safety, and supports environmental
sustainability. Real-time flow tracking, anomaly detection, and efficient management are key. Convolutional
Neural Networks (CNNs) have become integral to such systems due to their compact size and ease of deployment. However,
their effectiveness depends heavily on the quality of the input data, especially image resolution. With high-resolution
cameras, especially 4K, balancing image quality, detection accuracy, and system efficiency is
critical. We propose the Multi-Resolution Traffic Monitoring Dataset (MRTMD), which captures transport
scenes at resolutions ranging from 2160p to 360p. This dataset serves as a benchmark for standard object
detection models, enabling the development of more efficient and cost-effective traffic monitoring solutions.
MRTMD will be freely available on GitHub, offering a valuable resource for researchers and practitioners.
We evaluate leading object detection models—YOLOv9, YOLOv8, YOLOv7, Faster R-CNN, FCOS,
SSD, and RT-DETR—across varied resolutions. Our analysis focuses on mean Average Precision (mAP),
recall, and processing time. We also assess the accuracy of Number Plate Recognition (NPR) for tasks
that require fine-grained detail extraction. Our findings show that detection performance typically varies
within ±0.01 to ±0.03 in mAP and recall across resolutions, suggesting higher resolutions are not always
advantageous. However, they remain crucial for tasks like NPR. The multi-resolution dataset enables a
comprehensive evaluation of the trade-off between image quality and task performance. Ultimately, our
analysis highlights the importance of resolution selection in large-scale deployments, informing system
designers and policymakers. This dataset is a vital tool for balancing performance, cost, and practical
constraints in real-world traffic monitoring.
Date: 2025-01-01
URI: /library/oar/handle/123456789/141933
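
As a rough sketch of the resolution-ladder evaluation the abstract describes, the snippet below runs one of the evaluated detector families (torchvision's Faster R-CNN implementation) on the same frame downscaled from 2160p to 360p, reporting per-resolution detection counts and latency. The placeholder frame, score threshold, and resolution ladder are assumptions; the mAP and recall computation against MRTMD ground truth is omitted.

    # Illustrative multi-resolution evaluation loop; not the paper's exact pipeline.
    import time
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights
    from torchvision.transforms.functional import resize

    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()

    frame = torch.rand(3, 2160, 3840)  # placeholder 4K frame; MRTMD frames would be loaded here

    for height in (2160, 1440, 1080, 720, 480, 360):
        width = int(height * 16 / 9)
        img = resize(frame, [height, width])      # downscale to the target rung of the ladder
        start = time.perf_counter()
        with torch.no_grad():
            pred = model([img])[0]                # dict with "boxes", "labels", "scores"
        elapsed = time.perf_counter() - start
        kept = (pred["scores"] > 0.5).sum().item()
        print(f"{height}p: {kept} detections above 0.5 in {elapsed:.2f}s")

Matching predictions against annotations at each rung would yield the per-resolution mAP and recall figures the paper compares.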

Title: Advancing experiential learning through generative AI-powered virtual reality
Authors: Borg, Gabriel; Azzopardi, Keith; Cini, Karl; Cardona, Luke; Caruana, Richard; Camilleri, Vanessa; Seychell, Dylan; Montebello, Matthew
Abstract: The accelerating complexity of professional practice requires higher education institutions to adopt
innovative pedagogical approaches that bridge knowledge acquisition and authentic skills application.
This paper presents the WAVE project, an educational innovation that integrates Generative Artificial
Intelligence (AI) with Virtual Reality (VR) to create adaptive, immersive training environments for water-rescue education. Designed as a proof-of-concept, WAVE addresses key limitations of traditional
training, including limited scenario variability, resource constraints, and safety risks, by leveraging
Generative AI to dynamically construct diverse, context-rich emergency situations. Central to WAVE’s
design is a generative scenario engine that produces highly realistic virtual environments and variable
rescue challenges, adapting to learner profiles, competencies, and progression. The system captures
real-time performance data, such as decision-making, response time, and physiological indicators, and
uses these inputs to personalise the learning pathway, ensuring that each training session evolves
according to individual needs and skill development goals. This continuous adaptation supports
experiential learning by exposing trainees to an extensive range of lifelike scenarios that would be
impractical or unsafe to reproduce physically. The paper outlines the instructional design framework
guiding the development of WAVE, with particular attention to how Generative AI enhances experiential
learning, reflective practice, and mastery of critical decision-making. Preliminary pilot studies involving
water-rescue trainees demonstrate promising outcomes, including increased situational awareness,
improved procedural accuracy, and heightened learner engagement. Furthermore, participants report
strong perceptions of realism, relevance, and motivation, highlighting the system’s potential to foster
deeper learning. Beyond its immediate application to water-rescue training, WAVE has broader
implications for higher education. The modular architecture and adaptive capabilities of Generative AI-powered VR can be extended to various disciplines requiring complex skill acquisition, including
healthcare, engineering, crisis management, and teacher education. The paper concludes by discussing
scalability, ethical considerations in AI-generated training content, and the essential role of human
oversight to ensure pedagogical soundness and learner well-being. This contribution aims to stimulate
dialogue on how Generative AI and VR can reshape experiential learning in higher education, offering
scalable, safe, and personalised alternatives to traditional skills training.
Date: 2025-01-01
URI: /library/oar/handle/123456789/141907
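
A toy sketch of the adaptive loop the abstract describes, in which real-time performance signals drive the parameters handed to a generative scenario engine. The signal names, thresholds, and parameter space below are invented for illustration; the generative engine that turns parameters into a VR scene is not shown.

    # Hypothetical controller mapping trainee performance to scenario parameters.
    from dataclasses import dataclass

    @dataclass
    class PerformanceSnapshot:
        decision_latency_s: float   # time to commit to a rescue decision
        procedural_accuracy: float  # fraction of protocol steps performed correctly
        heart_rate_bpm: float       # physiological stress indicator

    def next_scenario_params(p: PerformanceSnapshot, difficulty: int) -> dict:
        """Raise difficulty when the trainee is fast and accurate; ease off under stress."""
        if p.procedural_accuracy > 0.9 and p.decision_latency_s < 5.0:
            difficulty = min(difficulty + 1, 10)
        elif p.procedural_accuracy < 0.6 or p.heart_rate_bpm > 160:
            difficulty = max(difficulty - 1, 1)
        return {
            "difficulty": difficulty,
            "sea_state": min(difficulty, 6),      # rougher water at higher levels
            "victims": 1 + difficulty // 3,       # more casualties to triage
            "visibility_m": max(50 - 4 * difficulty, 5),
        }

    params = next_scenario_params(PerformanceSnapshot(3.2, 0.95, 120.0), difficulty=4)
    print(params)  # parameters a generative engine would turn into the next VR scene

Running this after each session step gives the continuous adaptation described in the abstract: each scenario is parameterised by the trainee's latest performance rather than drawn from a fixed pool.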

Title: Exploring the educational frontier : a deep dive into VR and AI-enhanced learning environments
Authors: Saini, Akash Kumar; Montebello, Matthew
Abstract: Virtual Reality (VR) and Artificial Intelligence (AI) have the potential to revolutionize the way we approach education. VR technology creates immersive, computer-generated environments that simulate real-world scenarios, allowing students to interact with content in richer, more engaging ways. AI, on the other hand, can personalize instruction, provide real-time feedback, and adapt to the learning needs of individual students through learning analytics. This chapter focuses on exploiting the potential of VR and AI while highlighting the benefits, challenges, and ethical considerations of such a medium, as well as accessibility and inclusivity concerns.
Date: 2024-01-01
URI: /library/oar/handle/123456789/141902