OAR@UM Collection: /library/oar/handle/123456789/14673 (retrieved 2026-01-01T03:31:09Z)

Title: Runtime verification and compensations
Handle: /library/oar/handle/123456789/101496 (2012-01-01)
Abstract: As systems grow more complex, it becomes increasingly difficult to ensure their correctness. One approach for added assurance is to monitor a system's execution so that if a specification violation occurs, it is detected and potentially rectified at runtime. Known as runtime verification, this technique has been gaining popularity, with its main drawback being that it uses system resources to perform the checking. An effective way of minimising the intrusiveness of runtime verification is to desynchronise the system from the monitor, meaning that the system and the monitor progress independently. The problem with this approach is that the monitor may fall significantly behind the system, and by the time the monitor detects a violation, it might be too late to make a correction. To tackle this issue we propose a monitoring architecture, cLARVA, providing fine control over the system-monitor relationship: the monitor can be synchronised to the system by fast-forwarding through monitoring states, and the system can be synchronised with the monitor by reverting the system to an earlier state. Going back through system states is generally hard to automate since not all actions are reversible; reverse actions may expire, the reverse of an action may be context-dependent, and so on. This subject, known as compensations, has been studied in the context of transactions, where the reversal of incomplete transactions ensures that either the transaction succeeds completely or it leaves no effect on the system. Although a lot of work has been done on compensations, the literature still presents open challenges in compensation programming. We show how these limitations can be alleviated by separating compensation programming concerns from other concerns. Inspired by monitor-oriented programming - a way of using runtime verification to trigger functionality - we propose a novel monitor-oriented notation, compensating automata, for compensation programming.
Integrated within a monitoring framework which we call monitor-oriented compensation programming, this notation enables a programmer to program and trigger compensation execution completely through monitoring with the system being unaware of compensations. Finally, we show how compensating automata can be used for programming the synchronisation between the system and the monitor in cLARVA, enabling
complex compensation logic to be seamlessly programmed. To evaluate our approach, we applied it to an industrial case study based on a financial system handling virtual credit cards, consisting of thousands of users and transactions. The results show that the architecture has been successful in both synchronising the monitor to the system by fast-forwarding the monitor, and also in synchronising the system to the monitor using compensations to reverse the system state - achieving a virtually non-intrusive runtime monitoring architecture.
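The two synchronisation directions described above can be illustrated with a small sketch: the monitor fast-forwards through buffered events to catch up with the system, and on detecting a violation it rolls the system back by running recorded compensations in reverse order. The class name, API, and the toy bank-account property below are all invented for illustration; this is not the cLARVA interface.

```python
from collections import deque

class CompensatingMonitor:
    """Minimal sketch of desynchronised monitoring with compensations."""

    def __init__(self, property_check):
        self.property_check = property_check  # event -> True if the event is OK
        self.buffer = deque()                 # events the monitor has not yet checked
        self.compensations = []               # LIFO stack of reverse actions

    def system_event(self, event, compensation):
        # Called by the system; returns immediately, so the monitor lags behind.
        self.buffer.append(event)
        self.compensations.append(compensation)

    def fast_forward(self):
        # Synchronise the monitor to the system by draining buffered events.
        while self.buffer:
            event = self.buffer.popleft()
            if not self.property_check(event):
                self._compensate()   # violation: roll the system back
                return False
        return True

    def _compensate(self):
        # Synchronise the system to the monitor: undo actions in reverse order.
        while self.compensations:
            self.compensations.pop()()

# Toy usage: withdrawals over 50 violate the monitored property.
account = {"balance": 100}
monitor = CompensatingMonitor(lambda event: event[1] <= 50)

def withdraw(amount):
    account["balance"] -= amount
    monitor.system_event(("withdraw", amount),
                         lambda: account.__setitem__("balance",
                                                     account["balance"] + amount))

withdraw(30)
withdraw(80)                 # violation, detected only when the monitor catches up
ok = monitor.fast_forward()  # ok is False; compensations restore the balance to 100
```

Note the transactional flavour: once a violation is found, all pending actions are compensated in reverse order, so the incomplete sequence leaves no effect on the system.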
Description: Ph.D. (2012)

Title: Fusing and recommending news reports using graph-based entity-relation representations
Handle: /library/oar/handle/123456789/100948 (2012-01-01)
Abstract: When an event occurs in the real world, news reports describing this event start to appear on news sites on the World Wide Web within a few minutes of the occurrence of that event. If the event is significant, numerous news reports will appear on different sites, and each report will give its own description of the event based on the sources of information available to its author. Moreover, as time passes, each news site may publish new reports related to that event that will contain information that has just been discovered. For a person to obtain all the details related to a particular event, he/she will have to read through all the reports covering that event. The multitude of news reports being published on a continuous basis on the World Wide Web also presents an issue of information overload on users. A user would need to sift through a huge number of news reports to identify those reports that are of interest to him/her. News aggregator web sites may cluster related news reports, but they do not attempt to fuse the reports into a single document that contains all of the pertinent information about a single event without any repetition. Such web sites also tend to display news reports chronologically, and a user who tracks an event over the course of several days must sift through them to identify previously unread material. Tracker news reports tend to repeat information that the user may have already read. Some news aggregator web sites and other web services will alert users to breaking news about types of events, or more typically, about news involving a named entity or event type (e.g. earthquake). However, the user must generally intervene to provide details of the entity or event type to track. 
In this thesis, we tackle a number of research problems: in theory, a user can identify any RSS feed as a source of news he/she would like to receive; we then cluster reports about related news received from the separate RSS feeds as they arrive; we fuse the reports into a single document, trying to preserve a logical order in which sub-events occur and eliminating repetition; new reports related to an existing cluster are integrated into the fused document; the user's interaction with a fused report is monitored in such a way that information that the user has already read is summarised so that in the next visit the user can focus on the new (novel) news; a user model is maintained to automatically identify entities and event types that appear to be of interest to the user so
that he/she can be automatically alerted if a related new event occurs. We have developed the JNews news portal to implement our approach and
to provide an evaluation platform to measure its ability to: i) cluster related
news reports from disparate sources; ii) fuse related reports into a coherent document with minimal or no repetition but preserving all the information
contained in the source reports; iii) provide an adaptive reading environment that automatically summarises information in reports that have already been read; iv) automatically identify entities and event types that the user is likely to be interested in, based on their past interactions with JNews, to make personalised recommendations about previously unread breaking news. As we do not know the number of clusters in advance, JNews uses a modified K-Means clustering algorithm. We represent information contained in news reports using a simplified version of Sowa's Conceptual Graphs. The graph representing a news event contains entities and their relationships. Information from related news reports is merged into a single graph. We keep track of the source sentences that express the relationships. The fused report is generated using the maximally expressive set of sentences, i.e. the sentences that contain the most information about the entities and their relationships in the news report, while ensuring that all entities and relationships are expressed in the fused document. The advantage of using a simplified conceptual graph as the logical representation is that the entities and their relationships are represented canonically. We use the same graph to extract underlying patterns in information about types of events and/or entities. If a user tends to read different fused reports about the same entity or event type, then we can recommend similar breaking news to the user. In addition, we can recommend news using collaborative techniques. The user model is represented as a vector of weighted keywords. We use a summarisation technique, whereby the repetition of information across different documents is considered an indication of the salience of that information, to present summaries of fused reports (containing only the most important information) that have already been read by a user.
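Since the number of clusters is not known in advance, a common way to adapt K-Means-style clustering to a stream of incoming reports is single-pass threshold clustering: each report joins its most similar existing cluster, or starts a new one if no cluster is similar enough. The sketch below uses cosine similarity over bag-of-words vectors; the threshold value and the representation are illustrative assumptions, not the thesis's actual algorithm or parameters.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def incremental_cluster(reports, threshold=0.5):
    """Single-pass clustering: number of clusters emerges from the data.

    Each cluster is a (centroid, member_indices) pair; a report is folded
    into the most similar centroid, or starts a new cluster if none reaches
    the similarity threshold (an illustrative value).
    """
    clusters = []
    for i, words in enumerate(reports):
        vec = Counter(words)
        best, best_sim = None, threshold
        for cluster in clusters:
            sim = cosine(vec, cluster[0])
            if sim >= best_sim:
                best, best_sim = cluster, sim
        if best is None:
            clusters.append((vec, [i]))
        else:
            best[0].update(vec)   # fold the report into the centroid
            best[1].append(i)
    return clusters

groups = incremental_cluster([
    ["earthquake", "chile", "rescue"],
    ["chile", "earthquake"],
    ["election", "malta"],
])
# The two earthquake reports share a cluster; the election report starts its own.
```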
All components of JNews were designed to run fast without excessive computational resources, so as to function well in an operational environment and handle large amounts of data. The evaluation of JNews is performed on its three main components: the Document Clustering Component, the Document Fusion Component, and the Information Filtering (recommendation) Component. The Document Clustering Component was evaluated using three different datasets. We found that it performs very well at fine-grained clustering, but rather poorly at coarser-grained clustering. The Document Fusion component was
evaluated using a set of news reports downloaded from MSNBC News that cite their sources, and also using human evaluation. We show that the Document Fusion component is able to capture most of the information found across different source documents whilst maintaining readability. A corpus of news reports downloaded from Yahoo! News is used to evaluate the Information Filtering component. The results obtained are better than the baseline Rocchio algorithm without negative feedback.
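The baseline mentioned above, Rocchio without negative feedback, refines a query vector by pulling it towards the centroid of the documents judged relevant. A minimal sketch of that update follows; the alpha and beta values are the conventional textbook defaults, not parameters taken from the thesis, and vectors are plain term-weight lists over a shared vocabulary.

```python
def rocchio_positive(query, relevant_docs, alpha=1.0, beta=0.75):
    """Rocchio query refinement using positive feedback only (no negative
    term), i.e. the baseline configuration referred to above."""
    n = len(relevant_docs)
    # Centroid of the relevant documents, term by term.
    centroid = [sum(doc[i] for doc in relevant_docs) / n
                for i in range(len(query))]
    # Move the query towards the relevant centroid.
    return [alpha * q + beta * c for q, c in zip(query, centroid)]

updated = rocchio_positive([1.0, 0.0, 0.0],
                           [[0.0, 1.0, 0.0],
                            [0.0, 1.0, 1.0]])
# updated == [1.0, 0.75, 0.375]
```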
Description: Ph.D., Artificial Intelligence (2012)

Title: Markerless localisation and path planning for the visually impaired
Handle: /library/oar/handle/123456789/95517 (2012-01-01)
Abstract: A substantial number of people are affected in various ways by visual impairments,
most of which have no effective cure. Visually impaired people are presented with a
challenging task when navigating through unfamiliar areas, and must depend on
additional aids such as guide dogs and assistance provided by sighted people,
thus limiting their independence and privacy.
The goal of this study is to exploit the advantages of robust hardware, portability and
widespread use of mobile devices to provide a means of guidance to the visually
impaired in unfamiliar areas. Building on ongoing research in computer vision,
the proposed system aids the user in identifying their current location while
also providing navigation commands to reach the desired destination.
The prototype was implemented using the client-server architecture, where the server
implements both the place recognition module and the path planning module while the
client acts as a peripheral device. The evaluation carried out on the
implemented prototype gave satisfying results, with the place recognition
module achieving high recall and precision. Both the place recognition module
and the path planning module returned results in a relatively short period of time.
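The abstract does not specify which planning algorithm the path planning module uses, but its role can be illustrated with a breadth-first search over a graph of places, which finds a shortest route in terms of hops. The place names and graph below are invented for illustration.

```python
from collections import deque

def shortest_route(adjacency, start, goal):
    """Breadth-first path planning over a place graph (a generic sketch,
    not the thesis's actual planner). Returns the list of places from
    start to goal, or None if the goal is unreachable."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # Reconstruct the route by walking the predecessor links.
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adjacency.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

places = {"lobby": ["corridor"],
          "corridor": ["lab", "office"],
          "lab": [], "office": []}
route = shortest_route(places, "lobby", "lab")
# route == ["lobby", "corridor", "lab"]
```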
Description: B.Sc. ICT (Hons), Artificial Intelligence (2012)

Title: Cross document coreference resolution and disambiguation for named entities in user web history documents
Handle: /library/oar/handle/123456789/95508 (2012-01-01)
Abstract: At present, search engine technology does not measure relevance according to the
information needs of the user, but rather to the query searched. This is not an ideal
approach since different users use identical queries for different information needs.
One of the reasons this may happen is because of ambiguity between named entities
such as persons, organisations, locations, etc. This dissertation attempts to solve the
problem from the client's side by using a baseline streaming cross document
coreference resolution approach to discover and disambiguate named entities from the
user's web history. Several orthographic and contextual similarity measures are used
for this task, including tests involving the Dice score and topic features. The
cosine similarity measure is then used to calculate the similarity between the
named entity and the clusters. The final score dictates whether the named
entity is merged into an existing cluster or forms a new one. Queries submitted
to the search engine are then expanded using coreference information from the
cluster most similar to that query. To evaluate the system, the WePS-2007
testing corpus is used to measure relevance and accuracy.
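Of the orthographic measures mentioned, the Dice score is commonly computed over character bigrams when matching name variants. The sketch below shows that standard formulation; the thesis's exact feature set and any thresholds are not reproduced here.

```python
def bigrams(s):
    # Set of overlapping character pairs, e.g. "john" -> {"jo", "oh", "hn"}.
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    """Dice coefficient over character bigrams, a typical orthographic
    similarity measure for comparing named-entity mentions."""
    ba, bb = bigrams(a.lower()), bigrams(b.lower())
    if not ba and not bb:
        return 1.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

score = dice("Jon Smith", "John Smith")   # high: likely the same person
```

A score near 1.0 indicates near-identical spellings; in a coreference pipeline such an orthographic score would typically be combined with contextual similarity before deciding whether to merge a mention into a cluster.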
Description: B.Sc. IT (Hons)(Melit.) (2012)