OAR@UM Collection: /library/oar/handle/123456789/104560

Title: Real-time global illumination on distributed systems
Handle: /library/oar/handle/123456789/117969
Abstract: Realistic computer-generated visuals captivate users and provide them
with immersive experiences. Synthesising these images in real time
requires significant computational power and is out of reach for a wide
range of commodity hardware, particularly for mobile devices. Remote
rendering solves this problem by computing frames on the Cloud and
streaming the results to the client device as video. This solution provides
good image quality but introduces latency, which may make applications
appear unresponsive, degrading user experience. It may also require
significant bandwidth. This work investigates an alternative distributed
rendering strategy, where the computational power of the local device
is not discarded but instead used to eliminate or reduce latency. Three
methods are presented, all using a client-server architecture that splits the
rendering pipeline between a powerful remote endpoint and a weaker
local device. The first makes use of sparse irradiance sampling on a
voxelised representation of the scene. It supports multiple clients and is
highly configurable, allowing the use of different interpolation schemes
according to the capability of the device; image quality can be reduced to
lower reconstruction cost. The second stores radiance in a megatexture
and communicates it to the client device, where rendering is performed
at a low cost by sampling the megatexture. A coarse megatexture that fits
into GPU memory is maintained on the client device and used to provide
temporary low-quality output until high-quality server data are received.
The third uses the double warping image-based rendering technique to
produce novel views from two reference views. The client device also
receives irradiance data which it caches in a coarse megatexture. The
cache is used in a fallback mechanism that mitigates visual artefacts due
to holes in the data. The results show that input lag can be eliminated and
bandwidth requirements can be kept low, while retaining a measure of
fault tolerance and decent image quality. It is envisaged that distributed
methods similar to the ones proposed will gain more traction as the
computational capability of commodity devices increases and they can
be assigned larger workloads and use more sophisticated algorithms.
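The fallback mechanism described in the second and third methods can be pictured as a two-level lookup: the client prefers high-quality irradiance streamed from the server and falls back to its coarse locally cached megatexture when a texel has not yet arrived. The sketch below illustrates that idea only; the class and function names (`Megatexture`, `lookup_irradiance`) and the fixed coarse-to-fine scale are assumptions, not details from the thesis.

```python
class Megatexture:
    """Maps integer texel coordinates to RGB irradiance triples."""

    def __init__(self):
        self.texels = {}

    def store(self, u, v, rgb):
        self.texels[(u, v)] = rgb

    def sample(self, u, v):
        return self.texels.get((u, v))  # None when the texel is missing


def lookup_irradiance(server_tex, coarse_tex, u, v, scale=4):
    """Prefer server data; otherwise sample the coarse cache, where one
    coarse texel covers a scale-by-scale block of fine texels."""
    rgb = server_tex.sample(u, v)
    if rgb is not None:
        return rgb, "high"
    rgb = coarse_tex.sample(u // scale, v // scale)
    if rgb is not None:
        return rgb, "coarse"
    return (0.0, 0.0, 0.0), "hole"  # unshaded fallback for true holes
```

In this picture, "holes" only occur when neither level has data, which is what lets the coarse cache provide temporary low-quality output until server data arrive.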
Description: Ph.D.(Melit.)

Title: Smart contract proxy analysis
Handle: /library/oar/handle/123456789/107179
Abstract: The evolution of smart contract protocols in both size and complexity has led to the creation of new design patterns centred around modularity, maintainability and upgradeability. One such emerging pattern in the Ethereum space is the Diamond Pattern. The Diamond Pattern is analogous to a reverse proxy in Web2 infrastructure, as it provides a single endpoint to a smart contract protocol whose implementation is split across multiple smart contracts. The state (storage) across the implementation contracts is consolidated in the proxy contract through the use of the delegatecall opcode. Although mechanisms exist to ensure implementation contracts can operate over segmented sections of the storage (state), a portion of the state will always remain shared and mutable. Incompatibilities in the manipulation of these storage variables across implementation contracts can introduce unique vulnerabilities which can go unnoticed when observing a single contract. Current state-of-the-art static analysis tools do not take into account the unique intricacies of having shared mutable state across multiple smart contracts. This study introduces a general technique for multi-contract analysis under delegatecall, through the modularisation of the Gigahorse analysis framework and the propagation of storage facts between smart contracts during analysis execution. Following this, we present a new tool called SOuL-Splitter, which generates multi-contract evaluation test sets through automated decomposition of existing smart contracts. Overall, we find that our analysis technique is highly effective, with some vulnerabilities exhibiting over a 70% improvement in recall compared with their single-contract counterparts. We also find evidence of increased adoption of the Diamond Pattern in the Ethereum space, validating the need for, and value of, this research.
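The shared-mutable-state hazard the abstract describes can be modelled in a few lines: under delegatecall, every implementation contract ("facet") executes against the proxy's single storage, so two facets that reuse the same storage slot for different variables silently corrupt each other's state. The Python model below is purely illustrative; the slot numbers and facet names are invented and do not come from the study.

```python
class Proxy:
    """Toy model of a Diamond proxy: one storage shared by all facets."""

    def __init__(self):
        self.storage = {}  # slot number -> value

    def delegatecall(self, facet, *args):
        # The facet's code runs, but all reads/writes hit *this* storage,
        # mirroring the semantics of the delegatecall opcode.
        return facet(self.storage, *args)


def facet_token(storage, amount):
    # Facet A uses slot 0 for a token balance.
    storage[0] = storage.get(0, 0) + amount
    return storage[0]


def facet_pause(storage, paused):
    # Facet B incompatibly reuses slot 0 as a paused flag.
    storage[0] = 1 if paused else 0
    return storage[0]
```

Neither facet is buggy in isolation, which is exactly why single-contract analysis misses the vulnerability: the clash only appears when both are viewed against the proxy's consolidated storage.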
Description: B.Sc. (Hons)(Melit.)

Title: Using runtime verification to generate intrusion timelines from memory images
Handle: /library/oar/handle/123456789/107143
Abstract: Every action carried out in a system is recorded and logged in memory. These records are known as events and come in various types, for example opening a file, downloading a file, or accessing a network. Analysing such events is useful for detecting security breaches and computer misuse. However, the events stored in memory are not always analysed; several steps are needed to put them into a timeline for easier analysis. Manually checking for intrusions is impractical, since going through all the events, which occur continuously, takes a great deal of time; automated tools are therefore important and much needed in this scenario. In this project, patterns of insider intrusion threats are generated. The first step is to create a memory image from system memory. The next step is to extract events from the memory image and construct the timeline using the ready-made tool Volatility. For testing, different scenarios are created to see how patterns of insider threats can be detected through the timeline. The main part involves runtime verification, going through these timelines to check whether any insider threats are present. Larva is used for the analysis, and the timelines are checked against rules expressed as transitions, which represent the moves between the states of the timeline. An output file is generated while checking the timelines, and any intrusion found is reported.
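The rules-as-transitions idea above can be sketched as a small state machine stepped over the event timeline, in the style of a Larva monitor: each event may move the monitor to a new state, and reaching a designated bad state signals an intrusion. The property encoded below (a downloaded file leads to network access before the file is opened) and all state and event names are invented for illustration.

```python
BAD = "intrusion"

# transitions[state][event_type] -> next state; unlisted events leave the
# monitor in its current state.
TRANSITIONS = {
    "start":      {"download_file": "downloaded"},
    "downloaded": {"network_access": BAD, "open_file": "start"},
}


def check_timeline(events):
    """Step the monitor over a list of event-type strings; return the
    index of the first event that reaches the bad state, or None."""
    state = "start"
    for i, event in enumerate(events):
        state = TRANSITIONS.get(state, {}).get(event, state)
        if state == BAD:
            return i  # would be written to the output report
    return None
```

Returning the offending event's index is what lets the tool point back into the timeline when reporting an intrusion.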
Description: B.Sc. (Hons)(Melit.)

Title: A study on the prediction of cryptocurrency price trend using k-means clustering and KNN classification
Handle: /library/oar/handle/123456789/107142
Abstract: The use of cryptocurrencies has increased all over the world. It is very hard to predict the future value of cryptocurrencies due to their high volatility. Cryptocurrency price prediction is therefore a subject of interest to various individuals, including investors. Numerous studies have been carried out; however, most focus on supervised deep learning models, which are typically prone to overfitting and hence end up compromising the accuracy of the model. Unsupervised approaches, on the other hand, have received less attention in the literature. This dissertation investigates the predictability of cryptocurrency movement using an unsupervised learning technique, k-means clustering, and a supervised learning technique, KNN classification. These algorithms are compared with deep learning models implemented in previous studies, to test whether these less complex structures can achieve accuracies similar to those of more complex deep learning models. The predictive accuracy is calculated and compared with the accuracy achieved by the state-of-the-art models proposed by Vella Critien et al. in their paper titled “Bitcoin Price Change and Trend Prediction Through Twitter Sentiment and Data Volume” [9].
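As a concrete illustration of the supervised half of this approach, the sketch below classifies the next step's trend (up or down) from a window of recent returns by majority vote over the k nearest historical windows. It covers only the KNN classification step, not the k-means clustering; the window size, k, and the toy return series are assumptions for illustration, not values from the dissertation.

```python
import math


def windows(returns, w=3):
    """Turn a return series into (feature window, next-step trend) pairs."""
    pairs = []
    for i in range(len(returns) - w):
        label = "up" if returns[i + w] > 0 else "down"
        pairs.append((returns[i:i + w], label))
    return pairs


def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs. Returns the majority
    label among the k training vectors closest to query (Euclidean)."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)
```

Because KNN stores the training windows rather than fitting parameters, it avoids the overfitting-by-training pathway the abstract attributes to deep models, at the cost of prediction-time distance computations.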
Description: B.Sc. (Hons)(Melit.)