OAR@UM Collection: /library/oar/handle/123456789/68105

Title: Automation of the LHC collimator beam-based alignment procedure for nominal operation
Handle: /library/oar/handle/123456789/68630
Abstract: The CERN Large Hadron Collider (LHC) is the largest particle accelerator in
the world, built to accelerate and collide two counter-rotating beams. The LHC
is susceptible to unavoidable beam losses; a complex collimation system, made up
of around 100 collimators, is therefore installed in the LHC to protect its superconducting
magnets and sensitive equipment.
The collimators are positioned around the beam following a multi-stage hierarchy.
These settings are calculated using a beam-based alignment (BBA)
technique, which determines the local beam position and beam size at each collimator.
This procedure is currently semi-automated: a collimation expert must
continuously analyse the signal from the Beam Loss Monitoring (BLM) device positioned
downstream of each collimator. Additionally, angular alignments are carried
out to determine the optimal angle for enhanced performance.
The human element, in both the standard and angular BBA, is a major bottleneck
in speeding up the alignment and limits the frequency at which alignments
can be performed to the bare minimum. This dissertation therefore seeks to improve
the process by fully automating the BBA.
This work proposes to automate the human task of spike detection by using
machine learning models. A data set was collated from previous alignment campaigns
and fourteen manually engineered features were extracted. Six machine
learning models were trained, analysed in depth, and thoroughly tested, achieving
a precision of over 95%.
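The fourteen engineered features are not listed in the abstract, so the three computed below (peak height, rise above baseline, and post-peak decay) are purely illustrative stand-ins for the kind of features a spike-detection model might consume:

```python
# Hypothetical sketch of the feature-extraction step. The actual fourteen
# features are defined in the dissertation; these three are illustrative only.

def extract_features(blm_signal):
    """Summarise a window of BLM loss readings into a small feature vector."""
    peak = max(blm_signal)
    peak_idx = blm_signal.index(peak)
    # Mean of the readings before the peak, used as a rough baseline.
    baseline = sum(blm_signal[:peak_idx or 1]) / (peak_idx or 1)
    rise = peak - baseline                     # height of the loss spike
    tail = blm_signal[peak_idx + 1:] or [peak]
    decay = peak - tail[-1]                    # how quickly losses fall off
    return [peak, rise, decay]

# A window containing a clear alignment spike:
print(extract_features([1.0, 1.1, 0.9, 6.0, 3.0, 1.2]))
```

Feature vectors of this shape would then be fed to the classifiers, with labels taken from expert-annotated alignment campaigns.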
To automate the threshold selection task, data from previous alignment campaigns
was analysed to define an algorithm to execute in real time, as the threshold
needs to be updated dynamically to track changes in the beam losses.
The thresholds selected by the algorithm were consistent with the user selections:
all automatically selected thresholds were suitable.
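The threshold-selection algorithm itself is defined in the dissertation; as a generic illustration of a dynamically updated threshold, the sketch below tracks the steady-state losses with an exponential moving average (EMA) and places the threshold at a hypothetical fixed multiple above it:

```python
# Illustrative only: one generic way a spike threshold can follow changing
# beam losses. The smoothing factor and multiplier are assumed values.

def update_threshold(loss, ema, alpha=0.1, factor=3.0):
    """Return the new EMA of the losses and the spike threshold above it."""
    ema = (1 - alpha) * ema + alpha * loss   # smooth the steady-state losses
    return ema, factor * ema                 # threshold scales with the EMA

ema = 1.0
for loss in [1.0, 1.2, 0.9, 1.1]:
    ema, threshold = update_threshold(loss, ema)
print(threshold)
```

As the background loss level drifts during the alignment, the threshold drifts with it, which is the behaviour the real-time algorithm must reproduce.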
Finally, this work seeks to identify the losses generated by each collimator, such
that any cross-talk across BLM devices is avoided. This involves building a cross-talk
model to automate the parallel selection of collimators and to determine
the actual beam loss signals generated by their corresponding collimators.
Manual, expert control of the alignment procedure was replaced by these dedicated
algorithms, and the software was re-designed to achieve fully automatic
collimator alignments. This software is developed in a real-time environment,
with the fully automatic BBA implemented on top of the semi-automatic BBA,
so that both alignment tools are available together and backward-compatibility
with all previous functionality is maintained. This new software was
used for collimator alignments in 2018, for both standard and angular alignments.
Automatically aligning the collimators decreased the alignment time by 70%,
whilst maintaining the accuracy of the results. The work described in this dissertation
was successfully adopted by CERN for LHC operation in 2018, and will
continue to be used in the future as the default collimator alignment software for
the LHC.
Description: PH.D., 2020

Title: Evolutionary algorithms for globally optimised multipath routing
Handle: /library/oar/handle/123456789/68629
Abstract: With the ever-increasing volume of traffic generated on the Internet, the efficiency
with which a network operates has become of great importance. The use of a
distributed network architecture and single-path routing algorithms limits the level
of efficiency a network is able to sustain. To tackle this problem, a set of novel,
globally optimal, multipath-capable routing algorithms is proposed. The routing
algorithms are designed to increase the total network flow routed over a given
network, while giving preference to lower delay paths. Two routing algorithm
frameworks are proposed in this work; one using Linear Programming (LP) and
the other using a Multi-Objective Evolutionary Algorithm (MOEA). Compared to
Evolutionary Algorithms (EAs), which are inherently sub-optimal, the LP routing
algorithm is guaranteed to find a solution with the maximum load a network is able
to handle without exceeding link capacities. However, LP solvers are unable
to concurrently optimise for more than one objective. On the other hand, EAs
are able to handle multiple, possibly non-linear objectives, and generate multiple
viable solutions from a single run. Even though EAs are inherently sub-optimal,
the EAs designed here manage to satisfy, on average, 98% of the demand found by
the optimal LP generated solution.
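The dissertation formulates routing as an LP; as a self-contained illustration of what "the maximum load a network is able to handle" means, the sketch below instead computes the maximum flow of a small network with the classic Edmonds-Karp algorithm (BFS augmenting paths), which is not the author's formulation:

```python
from collections import deque

# Illustrative only: maximum single-commodity flow via Edmonds-Karp,
# standing in for the LP's maximum-load objective on a toy network.

def max_flow(capacity, source, sink):
    n = len(capacity)
    flow = 0
    residual = [row[:] for row in capacity]
    while True:
        # Breadth-first search for a shortest augmenting path.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return flow               # no augmenting path left
        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# A 4-node network with two disjoint paths of capacity 10 and 5.
cap = [[0, 10, 5, 0],
       [0, 0, 0, 10],
       [0, 0, 0, 5],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))
```

A multipath router exploits both paths here, whereas a single-path algorithm could route at most 10 units, which is the gap the proposed algorithms target.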
All routing algorithms designed in this work make use of Per-Packet multipath
because of its increased flexibility when compared to its Per-Flow multipath counterpart.
It is well known that connection-oriented protocols, such as TCP, suffer
from severe performance degradation when used in conjunction with a Per-Packet
multipath routing solution. This problem is solved by adding a custom scheduler
to the Multipath TCP (MPTCP) protocol. Using the modified MPTCP protocol,
TCP flows are able to reach a satisfaction rate of 100% with very high probability,
even when a flow is transmitted over multiple paths. The combination of the
modified MPTCP protocol and the designed routing algorithm(s) led to a network
that is able to handle more load without sacrificing delay, when compared to OSPF
under all the conditions tested in this work using network simulations.
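The custom MPTCP scheduler is not detailed in the abstract; the sketch below shows only the general idea of per-packet multipath, assigning each packet to a path in proportion to hypothetical path weights:

```python
# Hypothetical sketch of per-packet multipath scheduling: each packet is
# dispatched on one of several paths, in proportion to assumed path weights
# (a smoothed weighted round-robin, not the dissertation's scheduler).

def schedule_packets(num_packets, weights):
    """Assign each packet index to a path, proportionally to its weight."""
    credits = [0.0] * len(weights)
    total = sum(weights)
    assignment = []
    for _ in range(num_packets):
        # Accrue credit proportional to weight, send on the path with the
        # most accumulated credit, then charge that path one packet.
        for i, w in enumerate(weights):
            credits[i] += w / total
        path = max(range(len(weights)), key=lambda i: credits[i])
        credits[path] -= 1.0
        assignment.append(path)
    return assignment

# Two paths with a 2:1 capacity ratio -> packets split roughly 2:1.
print(schedule_packets(6, [2, 1]))
```

Because consecutive packets of one flow traverse different paths, they can arrive out of order, which is exactly why a stock TCP stack degrades and a modified MPTCP scheduler is needed.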
Description: PH.D., 2020

Title: A tunnel structural health monitoring solution using computer vision and data fusion
Handle: /library/oar/handle/123456789/68552
Abstract: Tunnel structural health monitoring is predominantly done through periodic
visual inspections, requiring humans to be physically present on-site, possibly exposing
them to hazardous environments. Drawbacks include the subjectivity of the
surveys and, in most cases, the shutting down of operations during the inspection.
To mitigate these, an increasing effort has been made
to automate inspections using robotics to reduce human presence and computer
vision techniques to detect defects along tunnel linings. While defect identification
is beneficial, comprehensive monitoring to identify changes on tunnel linings can
provide a more informative survey to further automate inspection and analysis.
CERN, the European Organisation for Nuclear Research, has more than 50 km
of tunnels which need monitoring. This raised the need for a remotely operated
surveying system to monitor the structural health of the tunnels. Hence, a tunnel
inspection solution to monitor for changes on tunnel linings is proposed here.
Using a robotic platform hosting a set of cameras, tunnel wall images are captured
automatically and remotely. The tunnel environment poses a number of
challenges, two of which are varying lighting conditions and reflections on
metallic objects. To alleviate these, pre-processing stages were developed to correct
for the uneven illumination and to localise highlights. Crack detection using
deep learning techniques is employed following the pre-processing stages to identify
cracks on concrete walls. A change detection process is implemented through a
combination of different bi-temporal pixel-based fusion methods and decision-level
fusion of change maps. The evaluation of the proposed solution is made through
qualitative analysis of the resulting change maps followed by a quantitative comparison
with ground-truth changes. High recall and precision values of 81% and
93%, respectively, were achieved. The proposed solution provides a better means of
structural health monitoring where data acquisition is carried out on-site during
shutdowns or short, infrequent maintenance periods and post-processed off-site.
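The exact bi-temporal fusion methods are described in the dissertation; as one generic example of decision-level fusion of change maps, the sketch below combines binary change maps by per-pixel majority vote:

```python
# Illustrative only: decision-level fusion of binary change maps by
# per-pixel majority vote, one generic way change maps can be combined.

def fuse_change_maps(maps):
    """Majority-vote fusion of equally sized binary change maps."""
    votes_needed = len(maps) / 2
    fused = []
    for row_set in zip(*maps):
        fused.append([
            1 if sum(pixels) > votes_needed else 0
            for pixels in zip(*row_set)
        ])
    return fused

# Three 2x2 change maps produced by different pixel-based methods:
maps = [
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[0, 1], [0, 1]],
]
print(fuse_change_maps(maps))
```

Requiring agreement between methods suppresses spurious changes reported by any single method, which is what pushes precision up at a modest cost in recall.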
Description: PH.D., 2020

Title: Automated face reduction
Handle: /library/oar/handle/123456789/68529
Abstract: With the introduction of the GDPR, superseding the Data Protection Act,
any individual has the right to control and delete their personal data. Removing
frames from footage while keeping the remaining frames untouched is difficult to
achieve. Moreover, surveillance footage often needs to be left untouched, since it
is used as forensic evidence. Additionally, reviewing the whole footage, finding all
the frames in which the subject is visible, and editing the footage would require a
lot of manual work and time.
An alternative solution is to manually select the faces to be blurred throughout
the footage. By blurring the faces, the actions remain legible and the footage
will remain usable while also following the new regulations set by the GDPR.
Semi-Automated Video Redaction methods exist commercially. For example, both
IKENA Forensic and Amped FIVE software packages allow the user to specify the
region of interest to be obfuscated. With the use of automated tracking techniques,
the subject or object of interest is followed throughout the footage. While this tool
facilitates the process, the user still needs to manually find the person of interest
within the video, which can take a lot of time. Moreover, one major problem with
these tools is that their licenses cost thousands of euros. In this dissertation, an
autonomous face detector and recognizer is implemented to identify the individual
within a crowd or group of people and obfuscate the face throughout the whole
footage where the individual is present.
The method developed during this dissertation automatically detects the person
of interest within the video footage and blurs their face. Once a match is found,
the subject is back-tracked from the point of recognition to the beginning of the
video using an optical flow algorithm to estimate the path taken by the subject,
so that the face can be blurred in the preceding frames. Once this process finishes,
the video is processed from the point of recognition to the end using the same
tracking algorithm, blurring the face in the remaining frames. The output is the
same video clip, but with the subject's face blurred in every frame. The process
therefore requires no human intervention.
Extensive testing showed that implementing the system as a non-real-time
process yields better results. This is because limited resolution hinders the
performance of object detection. Processing the video offline, where the working
conditions can easily be manipulated, helped achieve a recognition rate of 74%
and an IoU of 0.783. When working in real time, the user depends on the
success of the detection: if the subject is not detected in the first frame in which
they appear, the face is not blurred at that instant but only later in the video.
The non-real-time method, although slower, yields better results since it uses
object tracking to forward-track and back-track the subject once identified.
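The IoU figure quoted above is the standard Intersection-over-Union overlap between a detected box and its ground truth; a minimal sketch for axis-aligned boxes given as (x, y, width, height), a format assumed here for illustration:

```python
# Sketch of the Intersection-over-Union (IoU) metric for axis-aligned
# bounding boxes (x, y, width, height); the exact box format used in the
# dissertation may differ.

def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlapping region (zero if the boxes are disjoint).
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Detected face box vs. ground truth, overlapping by half in x:
print(iou((0, 0, 10, 10), (5, 0, 10, 10)))
```

An IoU of 0.783 therefore means the predicted face region and the annotated face region overlap substantially in almost every evaluated frame.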
Description: B.SC.(HONS) COMPUTER ENG., 2020