OAR@UM Collection:
/library/oar/handle/123456789/53949
Date: 2025-11-05T06:06:58Z
/library/oar/handle/123456789/55267
Title: D2D cooperation for efficient transmission of live multi-view video in LTE-A cellular networks
Abstract: The demand for video content over cellular networks is growing at a monumental scale. The advent of fourth-generation (4G) broadband cellular technology and Long Term Evolution (LTE) has enabled unabated mobile video adoption. However, video traffic is placing considerable load on already-limited network resources. This poses a new landscape of challenges to service and network providers in distributing bandwidth-hungry services such as Multi-View Video (MVV). A paradigm to circumvent this challenge is to allow direct communication between closely located User Equipment (UE) without routing data via the radio and core network. Device-to-Device (D2D) capable networks promise scalability and performance improvements in utilising scarce resources. In this work, a mode selection strategy at the Base Station (BS) is proposed. It takes into account segment availability information at the UE to decide whether the Mobile User (MU) should receive dedicated resources via a traditional Content Delivery Network (CDN) or reuse existing resources via the D2D communications channel. A novel architecture is analysed whereby the BS in the Radio Access Network (RAN) holds video buffers, with caching maps that are aware of the video preferences of users in cell sites. This report lays the foundation for a proximity-enhanced D2D live video streaming system requiring no additional infrastructure for mobile devices. The simulation results provide guidelines for maximising view and user diversity as well as bandwidth scalability in LTE networks, in order to achieve high system resource utilisation whilst mitigating load on the network.
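The mode-selection rule the abstract describes can be sketched as a simple decision at the BS: serve a request over D2D reuse when a nearby UE already buffers the segment, otherwise fall back to dedicated CDN resources. The sketch below is a minimal illustration under assumed names (`D2D_RANGE_M`, the availability map, and `select_mode` are all hypothetical, not the dissertation's actual implementation).

```python
from math import hypot

D2D_RANGE_M = 50.0  # assumed maximum D2D link distance in metres

def select_mode(requesting_ue, segment_id, availability, positions):
    """Return ('D2D', helper) if some in-range UE buffers the segment,
    else ('CDN', None) for dedicated resources."""
    rx, ry = positions[requesting_ue]
    for helper, segments in availability.items():
        if helper == requesting_ue or segment_id not in segments:
            continue
        hx, hy = positions[helper]
        # reuse existing resources only if the helper is within D2D range
        if hypot(rx - hx, ry - hy) <= D2D_RANGE_M:
            return "D2D", helper
    return "CDN", None

# ue1 is close and buffers segment 3; ue2 buffers segment 5 but is far away
availability = {"ue1": {3, 4}, "ue2": {5}}
positions = {"ue0": (0.0, 0.0), "ue1": (30.0, 0.0), "ue2": (400.0, 0.0)}
print(select_mode("ue0", 3, availability, positions))  # ('D2D', 'ue1')
print(select_mode("ue0", 5, availability, positions))  # ('CDN', None)
```

A real strategy would additionally weigh channel quality and interference when granting D2D reuse, but the segment-availability check above is the core of the decision the abstract outlines.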
Description: M.SC.ICT TELECOMMUNICATIONS
Date: 2019-01-01T00:00:00Z
/library/oar/handle/123456789/55049
Title: Design of LDPC codec for the FX.25 pico-satellite link
Abstract: In this dissertation, a novel Weighted Bit-Flipping (WBF) Low Density Parity Check (LDPC) decoder, known as the Reliability Ratio-Based Self-Normalized WBF (RRSN-WBF) decoder, is proposed. The proposed decoder is compared to the latest WBF decoders examined in the literature. Alongside this decoder, a low-complexity encoding method is also used: the Richardson and Urbanke upper-triangular encoding method. The RRSN-WBF decoder and the Richardson and Urbanke encoding method are used to aid the University of Malta satellite radio project, known as UoMBSat1, whose aim is to launch a pico-satellite. To aid in the design of the communication system of the UoMBSat1 project, the currently available technologies for nano/pico-satellite systems that use a Software Defined Radio (SDR) were examined. During this examination, special attention was given to the type of Forward Error Correction (FEC) and data-link protocols these systems used.
In this project, the AX.25 and FX.25 protocols were implemented. The aim was to show that LDPC codes can also be integrated with the FX.25 protocol, and this was successfully achieved. Based on this implementation, various analyses concerning the integration of LDPC codes with the FX.25 protocol are outlined.
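The RRSN-WBF decoder above belongs to the bit-flipping family of LDPC decoders, which the following sketch illustrates in its plainest hard-decision (Gallager-style) form: compute the syndrome, count the unsatisfied parity checks each bit participates in, and flip the most-implicated bits until all checks pass. This is a minimal illustration only; the RRSN-WBF decoder additionally weights these counts by bit reliabilities, and the small Hamming(7,4) check matrix `H` stands in for a real LDPC code.

```python
# Hamming(7,4) parity-check matrix (a stand-in for a sparse LDPC matrix).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def bit_flip_decode(word, max_iters=10):
    """Hard-decision bit-flipping decoding of a binary word."""
    w = list(word)
    for _ in range(max_iters):
        # syndrome: which parity checks are currently violated
        syndrome = [sum(h * b for h, b in zip(row, w)) % 2 for row in H]
        if not any(syndrome):
            return w  # all checks satisfied: valid codeword
        # count the unsatisfied checks touching each bit position
        counts = [sum(s for s, row in zip(syndrome, H) if row[j])
                  for j in range(len(w))]
        worst = max(counts)
        # flip every bit involved in the maximum number of failed checks
        w = [b ^ (c == worst) for b, c in zip(w, counts)]
    return w

received = [0, 0, 1, 0, 0, 0, 0]  # all-zero codeword with one bit error
print(bit_flip_decode(received))  # [0, 0, 0, 0, 0, 0, 0]
```

Weighted variants such as RRSN-WBF replace the integer counts with soft, reliability-scaled metrics, which is what improves performance over this plain rule.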
Description: M.SC.ICT TELECOMMUNICATIONS
Date: 2019-01-01T00:00:00Z
/library/oar/handle/123456789/54274
Title: Classification of brain haemorrhage in head CT scans using deep learning
Abstract: A brain haemorrhage is a bleed within the brain tissue; it is the third leading cause of mortality across all ages and is caused either by a haemorrhagic stroke or by a significant blow to the head. One of the most commonly used diagnostic tools for patients being treated for a brain injury, or for patients presenting symptoms of a stroke or raised intracranial pressure, is the non-contrast Computed Tomography (CT) scan.
Computer-Aided Diagnosis (CAD) systems have been developed and introduced to aid radiologists and other professionals in their decision making. Deep learning CAD systems were not widely researched in the past, but recent advances in technology have made deep learning algorithms more popular, and their applications in medical imaging are now being actively researched.
This study utilises deep learning models to develop a computer-aided diagnosis (CAD) system to classify the different types of haemorrhages in head CT scans. The system was designed to build on the work done in the final year projects of Mr John Napier and Ms Kirsty Sant: Mr Napier's work focused on developing a system that can detect the presence of a brain haemorrhage in CT scans, while Ms Sant's work involved using a machine learning technique to classify brain haemorrhages based on intensity, shape and texture features.
Deep learning architectures, namely ResNet, DenseNet, and InceptionV3, were analysed in order to find the best-performing architecture for classifying the different types of brain haemorrhages from head CT scans. Moreover, a linear Support Vector Machine (SVM) was also built so that the performance of these architectures could be compared against it.
The dataset was obtained from the General Hospital of Malta, totalling 64 anonymised brain haemorrhage CT scans; 58 of these were used for training the deep learning models, and the remaining 6 cases were used to test them. Each architecture was trained for 100 epochs, and the overall training accuracy was 0.1786 for ResNet, 0.2976 for DenseNet, 0.3690 for InceptionV3, and 0.6083 for the linear multiclass SVM.
Description: B.SC.(HONS)COMPUTER ENG.
Date: 2019-01-01T00:00:00Z
/library/oar/handle/123456789/54273
Title: Audio effects library for Digital Signal Processor
Abstract: Digital Signal Processing (DSP) is continuously evolving and is used in various applications, including audio; it is heavily used in the music and film industries.
There are many offline algorithms and applications for audio DSP, but algorithms that perform DSP on real-time audio are more limited. These are used by live audio engineers and live musicians to enhance their instruments' sound.
Such digital effects are usually computationally expensive if performed on a generic low-performance processor, so executing DSP code on a dedicated Digital Signal Processor is much more efficient. While today's low-performance processors are powerful enough for basic audio processing, using a DSP device can improve quality, for example by using more filter coefficients or by running more effects simultaneously.
The aim of this project was to identify and implement a number of audio effects as a library that performs real-time DSP on audio signals. These effects include gain, reverb, echo, chorus, tremolo, equalisation and pitch change. Moreover, a demonstration application that uses this audio effects library in live audio settings was developed. The code had to be optimised as much as possible for efficient execution on a Digital Signal Processing board. The library was used successfully in an embedded system with an ARM Cortex-M4 processor.
Tests confirm proper operation of the digital audio effects. Audio was played back to ensure there were no audible artefacts.
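One of the effects named in the abstract, echo, reduces to mixing each sample with a delayed, attenuated copy of the input: y[n] = x[n] + g * x[n - D]. The sketch below illustrates that feedforward form; the function name and parameters are illustrative assumptions, not the library's actual API, and a real implementation on a DSP board would process fixed-size blocks with a circular delay buffer rather than whole Python lists.

```python
def echo(samples, delay_samples, gain):
    """Feedforward echo: y[n] = x[n] + gain * x[n - delay_samples]."""
    out = []
    for n, x in enumerate(samples):
        # read the delayed input sample, treating pre-start samples as silence
        delayed = samples[n - delay_samples] if n >= delay_samples else 0.0
        out.append(x + gain * delayed)
    return out

# A unit impulse produces the dry impulse plus one attenuated echo.
dry = [1.0, 0.0, 0.0, 0.0, 0.0]
print(echo(dry, delay_samples=2, gain=0.5))  # [1.0, 0.0, 0.5, 0.0, 0.0]
```

Feeding the output back into the delay line instead of the input would turn this into a repeating (feedback) echo, at the cost of a stability constraint on the gain.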
Description: B.SC.(HONS)COMPUTER ENG.
Date: 2019-01-01T00:00:00Z