OAR@UM Collection: /library/oar/handle/123456789/62485 Wed, 05 Nov 2025 16:37:32 GMT

MIRAI : a modifiable, interpretable, and rational AI decision support system /library/oar/handle/123456789/121368
Title: MIRAI : a modifiable, interpretable, and rational AI decision support system
Abstract: With the recent advancements and results obtained by Deep Learning, several corporations are eager to incorporate these algorithms into their workflows, especially with the emergence of Industry 4.0. However, decision makers using these systems find themselves unable to fully trust the AI from evaluation metrics alone, and require more transparency in the rationale behind their systems. As such, research has moved in the direction of Explainable AI (XAI), where explainable algorithms are created, or existing algorithms are reverse-engineered, to open up the Black Box of opaque AI algorithms. Both approaches present the outcome in a manner which is interpretable by humans. In this research project, we proposed an Explainable AI architecture for predictive analysis in industry. We utilized a novel approach of combining the Rule-Based Reasoning methodology of the Differentiable Inductive Logic Programming (δILP) algorithm with an explainable Machine Learning (ML) algorithm, a Bidirectional Long Short-Term Memory (BiLSTM) Neural Network (NN). The combination of these algorithms created a fully explainable system capable of a higher level of reasoning, or 'Deep Understanding'. In turn, this implementation of Deep Understanding allowed us to have more reliable and faithful explanations for the given application. Quantitative evaluation of this system took place by means of standard Machine Learning evaluation metrics such as F1-scores, Precision, Recall, and Receiver Operating Characteristic (ROC) curves.
Our BiLSTM performed with an average of 85% over several metrics, and δILP performed at over 95%. We further evaluated our system by taking the derived interpretations and transforming them into English explanations via the inferences and deductions stored in a standardized Knowledge Base. We verified these with industry professionals to determine whether the deductions made sense in a practical context. From this, we understood that a combination of values in the acceleration and rotation in either the X or Y axis exclusively may lead to an error. Highlighting these features in an explanation and sorting them by their strength gives technicians an idea of which solution to apply, saving time in deconstructing the problem and in turn improving Overall Equipment Efficiency (OEE). In future work, we would enhance these explanations by moving our solution to prescriptive maintenance, where we also highlight possible solutions for the error indicated by MIRAI.
Description: M.SC.ARTIFICIAL INTELLIGENCE
Wed, 01 Jan 2020 00:00:00 GMT /library/oar/handle/123456789/121368

Learning DFAs from sparse data /library/oar/handle/123456789/85570
Title: Learning DFAs from sparse data
Abstract: Regular inference is the task of identifying a Deterministic Finite-State Automaton (DFA) from a training set of positive and negative strings of a regular language. This problem is known to become increasingly difficult as the size of the target DFA grows and as the training data becomes sparser. One of the most studied algorithms for this task is the Evidence-Driven State Merging (EDSM) algorithm due to Rodney Price, which emerged as a winning algorithm in the international Abbadingo One competition of 1997.
We focus on ‘Abbadingo-style’ problems (learning DFAs over binary alphabets with training sets of various densities), and we present the results of a comprehensive analysis of how, and more importantly when, EDSM succeeds or fails in identifying either the exact target DFA or a low-error hypothesis with respect to a test set. We do this by creating thousands of problem instances to study their characteristics, as well as the behaviour of the state merging process. To support this analysis, we have developed an extensive software framework consisting of highly optimised and parallelised implementations of state merging algorithms, as well as several visual and statistical analysis tools. Motivated by the results and insights we obtained following this analysis, we propose three methods, each aimed at improving the generalisation rate of EDSM on Abbadingo-style problems when the training data is sparse. Our first method involves the development of an ensemble of monotonic, greedy heuristics which, together, are able to outperform EDSM. This method is inspired by Wolpert and Macready’s No Free Lunch theorems, which roughly state that, in general, no single heuristic will dominate all others over all problem instances. This is indeed supported by the empirical evidence we have gathered during our experimentation on various problem configurations. Associated with the ensemble of heuristics, we have also identified a method which enables us to predict, with a high degree of confidence, which of the individual heuristics in the ensemble results in a low or zero-error hypothesis. Our second approach, which we call the Delta Graph method, is based on the observation that when a greedy heuristic selects sequences of merges, the initial ones are especially critical. When a wrong choice is made, finding the target DFA becomes impossible and the likelihood of finding a low-error hypothesis is greatly reduced.
This method involves constructing and non-monotonically searching in a structure representing a highly condensed subspace of possible merges. This structure contains several short sequences of merges, where, with high experimental probability, at least one of them will consist of correct merges leading to the target. These merges establish enough constraints on a partial hypothesis that, when it is extended with a label-matching heuristic, it will lead to the target DFA or a low-error hypothesis. Typical evolutionary approaches in DFA learning operate by attempting to evolve a target DFA either as a transition matrix or by partitioning the states of a Prefix Tree Acceptor (PTA). In our third method, we present an alternative approach which, instead, evolves short sequences of merges selected from a subset of high state-reduction merges. As in the Delta Graph method, these short sequences of merges establish enough constraints on a hypothesis that, when it is extended with a label-matching heuristic, it will, with high experimental probability, lead to the target DFA or a low-error hypothesis. To ensure a common baseline for comparison, our methods are evaluated on target DFAs and training sets which have been constructed according to the Abbadingo One competition procedures. Our results show that each of the methods we have developed outperforms EDSM. For example, on 64-state target DFA problems and symmetrically structurally complete training sets at the sparsest density set by the Abbadingo One competition, while EDSM identifies low-error DFAs approximately 15% of the time, our ensemble, Delta Graph, and evolutionary methods do so about 26%, 43%, and 56% of the time respectively. We also obtain considerably better generalisation rates on problem instances which are highly adversarial to EDSM.
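The state-merging process at the heart of the abstract above can be illustrated with a small sketch: build a Prefix Tree Acceptor (PTA) from labelled training strings, then score a candidate merge EDSM-style by folding one state's subtree into the other's and counting pairs of agreeing accept/reject labels, rejecting the merge outright if an accepting state would collide with a rejecting one. This is a minimal illustration using our own simplified data structures, not the thesis's optimised, parallelised framework.

```python
# Illustrative sketch (our own simplified structures, not the thesis's
# framework): a Prefix Tree Acceptor built from labelled strings, plus
# the EDSM evidence score for a candidate state merge.

class PTA:
    """Prefix Tree Acceptor: a tree-shaped automaton whose states are
    labelled accepting (positive strings) or rejecting (negatives)."""
    def __init__(self):
        self.trans = {0: {}}    # state -> {symbol: successor state}
        self.label = {0: None}  # True = accept, False = reject, None = unlabelled
        self._next = 1

    def add(self, string, positive):
        s = 0
        for ch in string:
            if ch not in self.trans[s]:
                self.trans[s][ch] = self._next
                self.trans[self._next] = {}
                self.label[self._next] = None
                self._next += 1
            s = self.trans[s][ch]
        self.label[s] = positive

def edsm_score(pta, q1, q2):
    """EDSM evidence score for merging q2 into q1: fold q2's subtree
    into q1's, counting pairs of agreeing labels; return -1 if an
    accepting state would be merged with a rejecting one."""
    label = dict(pta.label)                             # work on copies so
    trans = {s: dict(t) for s, t in pta.trans.items()}  # the PTA is untouched
    score, stack = 0, [(q1, q2)]
    while stack:
        a, b = stack.pop()
        la, lb = label[a], label[b]
        if la is not None and lb is not None:
            if la != lb:
                return -1       # label conflict: merge is invalid
            score += 1          # both labelled and in agreement: evidence
        elif lb is not None:
            label[a] = lb       # propagate the label to the merged state
        for ch, tb in trans[b].items():
            if ch in trans[a]:
                stack.append((trans[a][ch], tb))  # determinise: merge children too
            else:
                trans[a][ch] = tb
    return score

# Toy training set: positives {"a", "aa"}, negative {"b"}.
pta = PTA()
pta.add("a", True)   # state 1, accepting
pta.add("aa", True)  # state 2, accepting
pta.add("b", False)  # state 3, rejecting
```

A greedy EDSM run would repeatedly pick the valid merge with the highest score; in this toy PTA, merging the accepting state 1 with the rejecting state 3 is rejected outright (score -1), while folding state 1 into the root is valid and gathers one piece of evidence.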
Description: PH.D.ARTIFICIAL INTELLIGENCE
Wed, 01 Jan 2020 00:00:00 GMT /library/oar/handle/123456789/85570

Runtime verification for API based software /library/oar/handle/123456789/83645
Title: Runtime verification for API based software
Abstract: The modern world is increasingly dependent on communication-centred and distributed software systems. Application Programming Interfaces (APIs) are what enable these systems to interact and share information with one another. The correctness of their interaction is of utmost importance, since the consequences of failure can be quite severe. Behavioural types offer a promising approach to address this matter by enabling the description of correct interactions against which systems are then verified. This work investigates the design and implementation of a hybrid (static and dynamic) approach for the verification of these types. In particular, we address those scenarios with two communicating components where one is available prior to deployment and the other must be treated as a black box. We propose an approach in which the available component is statically checked, whereas the black box component is verified dynamically at runtime (i.e., during execution) via a monitor. At the centre of our approach is a synthesis algorithm which generates monitor descriptions from a behavioural type. We show that these generated monitors are themselves correct, and that they only flag an erroneous action when the monitored component indeed commits one. This is crucial, since incorrect monitors can negatively affect the behaviour of the other components. To validate the implementability of this approach, we present a tool in Scala as a case study. We show that the executable code closely follows the theory, which suggests that the notion of correctness established in the theory carries over into the implementation.
Furthermore, the proposed theory and implementability aspects of our approach are not limited to this technology, but can be applied in those cases where static analysis of session types is possible and dynamic verification of a component's interaction is required.
Description: M.SC.COMPUTER SCIENCE
Wed, 01 Jan 2020 00:00:00 GMT /library/oar/handle/123456789/83645

Developing theoretical foundations for runtime enforcement /library/oar/handle/123456789/80746
Title: Developing theoretical foundations for runtime enforcement
Abstract: The ubiquitous reliance on software systems is increasing the need for ensuring their correctness. Runtime enforcement is a monitoring technique that uses monitors which can transform the actions of a system under scrutiny in order to alter its runtime behaviour and keep it in line with a correctness specification; this type of enforcement monitor is often called a transducer. In runtime enforcement there is often no clear separation between the specification language describing the correctness criteria that a system must satisfy, and the monitoring mechanism that actually ensures these criteria are met. We thus aim to adopt a separation of concerns between the correctness specification describing what properties the system should satisfy, and the monitor describing how to enforce these properties. In this thesis we study the enforceability of the highly expressive branching-time logic μHML, in a bid to identify a subset of this logic whose formulas can be adequately enforced by transducers at runtime. We conducted our study in relation to two different enforcement instrumentation settings: a unidirectional setting that is simpler to understand and formalise but limited in the type of system actions it can transform at runtime, and a bidirectional one that, albeit more complex, allows transducers to affect and modify a wider set of system actions.
During our investigation we define the behaviour of enforcement transducers and how they should be embedded with a system to achieve unidirectional and bidirectional enforcement. We also investigate what it means for a monitor to adequately enforce a logic formula, and define the necessary criteria that a monitor must satisfy in order to be adequate. Since enforcement monitors are highly intrusive, we also define a notion of optimality to use as a guide for identifying the least intrusive monitor that adequately enforces a formula. Using these enforcement definitions, we identify a μHML fragment that can be adequately enforced via enforcement transducers that drop the execution of certain actions. We then show that this fragment is maximally expressive, i.e., it is the largest subset that can be enforced via this type of enforcement monitor. We finally look into static alternatives to runtime enforcement and identify a static analysis technique that can also enforce the identified μHML fragment, but without requiring the system to execute.
Description: PH.D.
Wed, 01 Jan 2020 00:00:00 GMT /library/oar/handle/123456789/80746
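The abstract above describes transducers that enforce properties by dropping the execution of certain actions. The sketch below illustrates that idea on a toy safety property of our own choosing ("no read or close unless the file is currently open"); the property, action names, and classes are purely illustrative and are not the thesis's μHML formalisation or its instrumentation setting.

```python
# Toy illustration (property and names are our own, not the thesis's
# μHML formalisation): a suppression transducer that keeps a system's
# trace within a safety property by dropping violating actions at
# runtime instead of halting the system.

class SuppressionTransducer:
    """Enforces 'no read or close unless the file is currently open'."""
    def __init__(self):
        self.is_open = False

    def transform(self, action):
        """Return the action to forward, or None if it is suppressed."""
        if action == "open":
            self.is_open = True
            return action
        if action in ("read", "close"):
            if not self.is_open:
                return None        # violating action: suppress it
            if action == "close":
                self.is_open = False
            return action
        return action              # actions outside the property pass through

def enforce(trace):
    """Run a trace through a fresh transducer, yielding the enforced trace."""
    t = SuppressionTransducer()
    return [a for a in map(t.transform, trace) if a is not None]
```

For example, `enforce(["read", "open", "read", "close", "read"])` yields `["open", "read", "close"]`: both out-of-order reads are dropped, while the already-correct portion of the trace passes through unchanged.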