OAR@UM Collection: /library/oar/handle/123456789/90852 2025-11-04T15:58:39Z

Title: Forest mapping and classification from satellite imagery
Handle: /library/oar/handle/123456789/120583
Abstract: In recent decades, remote sensing data has been increasingly used for tree species classification in forest management, all the more so given the urgency brought on by climate change. Academia and commercial entities are investing in research to achieve more reliable classification models. This research provides a detailed background of the existing literature, emerging trends and potential areas of research that could be explored further to improve results, and then compares various classification models over a single contiguous region at the centre of Metropolitan France. Feature analysis ranked statistically derived features, namely the mean and variance of Sentinel-2 optical indices computed over a neighbourhood of adjacent pixels. Pixel-level supervised classifiers were compared: a random forest classifier achieved a kappa score of about 40%, while basic neural network implementations such as deep feed-forward networks, temporal convolutional models and LSTM models reached kappa scores of, at best, around 65% to 70%. A significant portion of this dissertation also investigates U-Net image segmentation under a limited feature set and sparse reference data, with such models achieving reasonable results, with kappa scores ranging from 52% to 83% depending on the level of detail required from the classifier.
Description: M.Sc. (Melit.)
Date: 2021-01-01T00:00:00Z
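As a rough illustration of the pixel-level supervised classification and kappa-based evaluation described in the abstract above, the sketch below trains a random forest on a hypothetical feature matrix standing in for per-pixel Sentinel-2 index statistics (mean and variance over a pixel neighbourhood) and reports Cohen's kappa. It is not the dissertation's actual pipeline; the arrays, class count and hyperparameters are placeholders.

    # Minimal sketch: pixel-level tree species classification with a random
    # forest, evaluated with Cohen's kappa. Features and labels are random
    # placeholders for per-pixel Sentinel-2 index statistics and species IDs.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import cohen_kappa_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_pixels, n_features = 5000, 12              # e.g. mean/variance of 6 optical indices
    X = rng.normal(size=(n_pixels, n_features))  # placeholder feature matrix
    y = rng.integers(0, 8, size=n_pixels)        # placeholder labels for 8 species classes

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)

    kappa = cohen_kappa_score(y_test, clf.predict(X_test))
    print(f"Cohen's kappa: {kappa:.2f}")

With real per-pixel features and reference labels in place of the random arrays, the same scoring call gives the kappa values quoted in the abstract.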
Title: Multi-modality and multi-sensor image registration for satellite images
Handle: /library/oar/handle/123456789/119234
Abstract: Satellite imagery provides information which is fundamental to remote sensing applications. Two such applications are image registration and the fusion of hyperspectral and multispectral imagery, with registration being a fundamental pre-processing step to fusion. Moreover, little previous work in remote sensing has addressed the registration of satellite imagery with significant scale differences or the registration of multi-modal satellite images. The aim of this work was to carry out a comprehensive analysis of registration techniques in remote sensing. The SIFT algorithm, with different parameter sets, was used to register thermal-to-thermal satellite imagery with significant scale differences. The work also examined and compared other feature-based, area-based and optical flow-based techniques for the registration of multi-modal and multi-sensor satellite imagery. The multi-modal data includes optical imagery from Sentinel-2 and Landsat-8, SAR images from Sentinel-1, and thermal images from Landsat-8 and Sentinel-3. The findings of this study show that the most common modality pairings in the image registration of remote sensing data are Optical-Optical and synthetic aperture radar (SAR)-Optical. For the registration of thermal Landsat-8 to Landsat-8 and thermal Landsat-8 to Sentinel-3, the general pattern was that upscaling the sensed image increased the misregistration and RMSE owing to the larger scale difference. For the registration of SAR-Optical satellite imagery, the overall best-performing technique was SIFT Flow. For the registration of single-modality data, the overall best was SIFT, followed by the Enhanced Correlation Coefficient (ECC). For the registration of multi-modal satellite imagery, the overall best was SIFT.
Description: M.Sc. (Melit.)
Date: 2021-01-01T00:00:00Z

Title: Manufacturing process anomaly detection in RF cavities
Handle: /library/oar/handle/123456789/112002
Abstract: In order to accelerate charged particles and output them at a constant and controllable energy, modern particle accelerators make use of hollow, torus-like metal structures known as Radio Frequency (RF) cavities. An electromagnetic field is applied to the cavities, which in turn efficiently transfers the field's energy to passing ions, accelerating them to a target speed. As newer accelerator designs are required to output particles at ever increasing energy levels, cavities are operated in their superconductive state in order to achieve a much higher accelerating gradient. The cavities are typically constructed as multiple cells, each consisting of two separate halves which are then welded together. The welding process heats up the surrounding cavity material, making it more susceptible to the formation of defective regions. Other types of anomalies can also manifest on the internal cavity surface, such as scratches caused by improper handling of the cavity and contamination from foreign objects. These anomalies are liable to affect the performance of the cavities through a process known as quenching, where defective regions experience an increase in temperature. This in turn heats up the material surrounding the defect, bringing the cavity out of the superconductive state and greatly reducing the accelerating gradient. Well-established diagnostic tests that can locate these anomalies, such as the RF cold test, are available, but they require the cavity to be operated in its superconducting state, which is time consuming, expensive and requires multiple trained operators. Instead, vision-based systems which mark anomalies based on their physical appearance have been proposed as a quicker preliminary diagnostic tool. This work seeks to improve current cavity visual inspectors by proposing an optical system for a pre-existing prototype inspection robot located at the European Organisation for Nuclear Research (CERN). The system is able to scan the entire interior cavity surface at a high enough spatial resolution that anomalies as small as 10 µm in length can be reliably detected. As a full scan of each cavity produces several thousand images, an automated anomaly detection and localisation model is employed. The model makes use of high-resolution edge features from a wavelet-based detector, which provide accurate localisation information while rejecting false edges originating from noise, together with contextual features extracted from the layers of a pre-trained neural network to detect the presence of anomalies. On the obtained cavity image dataset, the model achieved a sensitivity of 78% and a specificity of 61%, successfully identifying the anomalies most likely to affect cavity performance.
Description: M.Sc. (Melit.)
Date: 2021-01-01T00:00:00Z
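To make the two-branch idea in the RF cavity abstract more concrete, the sketch below combines wavelet detail coefficients (high-resolution edge/localisation cues) with activations from a pre-trained CNN (contextual features for deciding whether a patch is anomalous). This is only an assumed illustration: the abstract does not name the wavelet, the backbone network or the final classifier, so the Haar wavelet, ResNet-18 backbone and the helper names wavelet_edge_map and contextual_features are all hypothetical choices.

    # Sketch of wavelet-based edge cues plus pre-trained CNN contextual features
    # for cavity-surface patches. Thresholding and the final anomaly classifier
    # are omitted; all architectural choices here are assumptions.
    import numpy as np
    import pywt
    import torch
    from torchvision import models, transforms

    def wavelet_edge_map(gray_patch: np.ndarray) -> np.ndarray:
        """Edge-strength map from first-level wavelet detail coefficients."""
        _, (cH, cV, cD) = pywt.dwt2(gray_patch.astype(np.float32), "haar")
        return np.sqrt(cH**2 + cV**2 + cD**2)   # half-resolution edge magnitude

    # Contextual features: pooled activations of a pre-trained ResNet-18.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((224, 224), antialias=True),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def contextual_features(rgb_patch: np.ndarray) -> torch.Tensor:
        with torch.no_grad():
            x = preprocess(rgb_patch).unsqueeze(0)   # (1, 3, 224, 224)
            return feature_extractor(x).flatten(1)   # (1, 512) feature vector

    # Usage on a placeholder patch; a real system would feed both outputs to a
    # trained detector and keep only edges above a noise-rejection threshold.
    patch = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
    print(wavelet_edge_map(patch.mean(axis=2)).shape, contextual_features(patch).shape)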
Title: Learning garment synthesis through a shared multimodal approach
Handle: /library/oar/handle/123456789/111679
Abstract: Designing real and virtual garments has become extremely demanding due to the increased need for synthesising realistically dressed digital humans for video games and movies. The traditional workflow involves a trial-and-error procedure in which a mannequin is draped to judge the resultant folds, a process which is carried out iteratively until the desired look is achieved. This work presents a garment synthesis pipeline that avoids simulating the final garment by using a multimodal dataset consisting of garment sketches, body parameters and 3D garment meshes, in which each domain has a different number of dimensions. The pipeline begins by synthesising garment sketches using a Generative Adversarial Network (GAN), followed by filtering out dissimilar generated garment sketches using an Autoencoder (AE) with anomaly detection. A quantitative evaluation of a variety of GAN and AE models against the training dataset showed that a Wasserstein GAN with Gradient Penalty (WGAN-GP) and Angle-Based Outlier Detection (ABOD) produced the best results. A Variational Autoencoder (VAE) model was also used to analyse the similarity between the distributions of real and generated garment sketches. A multimodal AE with a shared embedding is then trained on the multimodal dataset, allowing predictions across these domains given the garment representation in at least one of them. Finally, a fitting algorithm is developed to dress the 3D mannequin mesh with the 3D garment mesh based on the associated body parameters; the dressed mannequins are checked using a simple geometric criterion to discard invalidly dressed ones. A qualitative evaluation of garment sketch synthesis, multimodal garment design and the overall pipeline quality was carried out using a rating and preference-judgment survey. The 32 participants highlighted that (i) WGAN-GP inliers performed closely to the training set, (ii) WGAN-GP outliers performed significantly worse than the WGAN-GP inliers, (iii) characters within animated crowds were perceived as similar yet distinct, and (iv) a crowd without clones achieved better quality than a crowd with clones.
Description: M.Sc. (Melit.)
Date: 2021-01-01T00:00:00Z
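The WGAN-GP mentioned in the abstract above adds a gradient-penalty term, E[(||grad_xhat D(xhat)||_2 - 1)^2], computed on samples interpolated between real and generated data, to the critic loss. The sketch below shows only that standard penalty term; the toy critic, tensor shapes and flattened "sketch" vectors are hypothetical and do not reflect the dissertation's actual architecture.

    # Sketch of the WGAN-GP gradient-penalty term for a toy critic on
    # flattened garment-sketch vectors (shapes and critic are placeholders).
    import torch
    import torch.nn as nn

    critic = nn.Sequential(nn.Linear(64 * 64, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

    def gradient_penalty(critic: nn.Module, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
        eps = torch.rand(real.size(0), 1)                   # per-sample interpolation factor
        interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
        scores = critic(interp)
        grads, = torch.autograd.grad(
            outputs=scores, inputs=interp,
            grad_outputs=torch.ones_like(scores), create_graph=True)
        return ((grads.norm(2, dim=1) - 1) ** 2).mean()     # penalise ||grad|| deviating from 1

    # Toy usage: real and generated batches, flattened to vectors; in training
    # this term is scaled by a lambda factor and added to the critic loss.
    real = torch.randn(16, 64 * 64)
    fake = torch.randn(16, 64 * 64)
    print(float(gradient_penalty(critic, real, fake)))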