OAR@UM Collection: /library/oar/handle/123456789/6642

Title: Optimizing scheduling in a pharmaceutical company
Abstract: Scheduling is an important task carried out on a daily basis. A "good" schedule increases the company's profit, and customers are more willing to buy products when their orders are satisfied in the shortest possible time. Scheduling problems may also be formulated with many different objective functions, such as minimization of makespan, minimization of delays, and minimization of total completion time. There has been extensive research on algorithms for solving such problems. An overview of these algorithms is provided from both a deterministic and a stochastic theoretical perspective. This dissertation mainly focuses on the problem of minimizing the makespan on identical parallel machines. A mixed integer linear program (MILP) is built to solve this scheduling problem using real-life data from a local pharmaceutical company. In addition, the Longest Processing Time (LPT) heuristic, one of the oldest and best-known scheduling algorithms, is applied and its results are compared with those of the MILP and with the company's original schedule.
Description: B.SC.(HONS)STATS.&OP.RESEARCH
Date: 2015-01-01
URI: /library/oar/handle/123456789/93900
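
As a rough illustration of the LPT heuristic named in the abstract above, the sketch below assigns each job, in decreasing order of processing time, to the currently least-loaded machine. The processing times and machine count are hypothetical values chosen for the example, not data from the dissertation, and the function name lpt_schedule is an assumption of this sketch.

# Minimal sketch of the Longest Processing Time (LPT) heuristic for
# minimizing makespan on identical parallel machines.
import heapq

def lpt_schedule(processing_times, num_machines):
    """Assign jobs to machines with LPT; return the makespan and the assignment."""
    # Sort jobs in decreasing order of processing time.
    jobs = sorted(enumerate(processing_times), key=lambda j: j[1], reverse=True)
    # Min-heap of (current load, machine index) so the least-loaded machine pops first.
    loads = [(0, m) for m in range(num_machines)]
    heapq.heapify(loads)
    assignment = {}
    for job_id, p in jobs:
        load, machine = heapq.heappop(loads)      # least-loaded machine
        assignment[job_id] = machine
        heapq.heappush(loads, (load + p, machine))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

if __name__ == "__main__":
    times = [7, 5, 4, 3, 3, 2, 2]                  # hypothetical processing times (hours)
    makespan, assignment = lpt_schedule(times, num_machines=3)
    print("LPT makespan:", makespan)
    print("Job -> machine:", assignment)

Such a heuristic schedule can serve as a benchmark or warm start for an exact MILP formulation of the same problem, which is the kind of comparison the abstract describes.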
Title: Parameter estimation of Lévy processes
Abstract: Lévy processes have become increasingly popular in mathematical finance because of their ability to capture the leptokurtic shape of stock return distributions as well as the jumps observed in stock prices. In this dissertation we present some of the theory and major results on Lévy processes, focusing in particular on the Normal Inverse Gaussian and the Meixner processes. We then look at different parameter estimation methods for Lévy processes, which can be split into two major categories: the parametric approach and the nonparametric approach. For the nonparametric approach we consider a projection estimator proposed by Comte and Genon-Catalot [14] and an estimator introduced by Rubin and Tucker [44]. For the parametric approach we consider the Integrated Sum of Squared Estimation method proposed by Heathcote [28] and a Stochastic Programming method presented by Sant and Caruana [45]. Finally, these estimation methods are applied to the Malta Stock Exchange Index and the results are compared where possible.
Description: B.SC.(HONS)STATS.&OP.RESEARCH
Date: 2015-01-01
URI: /library/oar/handle/123456789/93899

Title: Analyzing dichotomous and multichotomous categorical responses to assess self-esteem using response models
Abstract: Item Response Theory (IRT) is a statistical procedure typically used in psychological measurement, with specific reference to the attitudes, abilities, achievement levels and personality traits of individuals. Its main aim is to construct and analyze scores on a person's latent trait using questionnaires, personality assessments and surveys. IRT assesses a person's probability of rating an item in a particular manner according to a number of factors, namely the respondent's trait level (a quality of the individual), and the item difficulty and item discrimination (qualities of the item). Dichotomous IRT models have been developed to cater for two-category responses. The Rasch model gives the probability that a person with a particular trait level endorses an item with a specific difficulty. If the item discrimination varies across items, the Two-Parameter Logistic (2-PL) model is used. The Three-Parameter Logistic (3-PL) model generalizes the 2-PL model by introducing a guessing parameter. Multichotomous IRT models have been developed to cater for rating responses with more than two categories. The Rating Scale model (RSM) and the Partial Credit model (PCM), which belong to the polytomous family of Rasch models, are also described. The 1-PL and 2-PL models, as well as the RSM and the PCM, are fitted to a data set related to self-esteem and are implemented using the facilities of STATA's gllamm routine. The questionnaire, which was distributed to 303 individuals, comprised ten items, each rated on a 4-point Likert scale. A summary of the main findings is provided for each fitted model.
Description: B.SC.(HONS)STATS.&OP.RESEARCH
Date: 2015-01-01
URI: /library/oar/handle/123456789/93897
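
The dichotomous models mentioned in the abstract above have simple closed-form response curves; the sketch below evaluates them for a hypothetical trait level and made-up item parameters (the values and function names are illustrative assumptions, not estimates from the self-esteem data set).

# Minimal sketch of dichotomous IRT response probabilities.
# Rasch (1-PL): P(correct) = 1 / (1 + exp(-(theta - b)))
# 2-PL:         P(correct) = 1 / (1 + exp(-a * (theta - b)))
# where theta is the trait level, b the item difficulty, a the item discrimination.
import math

def rasch_prob(theta, difficulty):
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def two_pl_prob(theta, difficulty, discrimination):
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

if __name__ == "__main__":
    theta = 0.5                                   # hypothetical self-esteem trait level
    for b, a in [(-1.0, 0.8), (0.0, 1.0), (1.5, 1.7)]:   # made-up item parameters
        print(f"b={b:+.1f} a={a:.1f}  Rasch={rasch_prob(theta, b):.3f}  "
              f"2-PL={two_pl_prob(theta, b, a):.3f}")

The 2-PL curve reduces to the Rasch curve when every item has discrimination a = 1, which is why the Rasch model is the natural baseline before allowing item-specific discrimination.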
Title: Parametric and non-parametric estimation methods for latent variables
Abstract: The aim of this dissertation is to compare two estimation methods, the Expectation Maximization (EM) algorithm and the Non-Parametric Maximum Likelihood Estimation (NPMLE) approach, for estimating a number of unobserved groups, or latent classes. A medical data set on patients suffering from schizophrenia was used to compare the two methods. The nonparametric maximum likelihood estimator of an unspecified distribution is a discrete distribution with nonzero mass probabilities at a finite number of mass points (locations). The true number of locations is determined when the likelihood is maximized, using the concept of a directional derivative, called the Gateaux derivative. The NPMLE algorithm is initialized by setting the number of mass points of the latent variable to 1 and then searches for a new mass point over a fine grid covering a wide range of values. The algorithm terminates when the directional derivative is non-positive for all candidate mass points. The method was applied to the medical data set and implemented using the facilities of GLLAMM, a subroutine of STATA. The approach yields posterior means, which are the probabilities that a patient belongs to each of the latent classes. Patients are then allocated to the latent class (segment) with the largest posterior mean. The EM algorithm uses a different approach, in which the observed data are augmented with unobserved data: 0-1 indicators of whether a patient belongs to a particular latent class. The posterior probabilities are the expected values of these unobserved indicators and are calculated using Bayes' theorem. The EM algorithm was applied to the data set and implemented using the facilities of GLIM. As in the NPMLE approach, patients are then allocated to the latent class with the largest posterior probability. In this approach, both the clustering and estimation procedures are carried out simultaneously, with a regression model fitted for each segment. Both the EM (parametric) and the NPMLE (non-parametric) approaches showed that the 2-segment model is the best model for the data set. Both methods yielded similar parameter estimates for the regression models and a similar allocation of patients to the two latent classes. The two estimation methods were also compared in terms of execution time. For a small number of latent classes the two methods yielded similar execution times; however, as the number of segments increases, the EM approach converges faster than the NPMLE approach. The main advantage of the NPMLE approach is that it guarantees convergence to a global maximum, while the EM algorithm only guarantees convergence to a local maximum.
Description: B.SC.(HONS)STATS.&OP.RESEARCH
Date: 2015-01-01
URI: /library/oar/handle/123456789/93896
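
The posterior calculation described in the abstract above is just Bayes' theorem applied to class priors and class-conditional likelihoods. The sketch below shows that E-step for a simple two-class Gaussian model, which stands in for the segment-specific regression models of the dissertation; the data, parameter values and function name are hypothetical.

# Rough sketch of the E-step posterior membership probabilities under a
# two-class model, followed by allocation to the class with the largest posterior.
import numpy as np

def e_step(y, weights, means, sds):
    """Posterior probability that each observation belongs to each latent class."""
    # Prior weight times class-conditional normal density, one column per class.
    dens = np.column_stack([
        w * np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        for w, m, s in zip(weights, means, sds)
    ])
    return dens / dens.sum(axis=1, keepdims=True)   # Bayes' theorem (normalize rows)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: two well-separated groups, standing in for the patient data.
    y = np.concatenate([rng.normal(0, 1, 50), rng.normal(4, 1, 50)])
    post = e_step(y, weights=[0.5, 0.5], means=[0.0, 4.0], sds=[1.0, 1.0])
    allocation = post.argmax(axis=1)                 # class with largest posterior
    print(post[:3].round(3))
    print(allocation[:10])

In a full EM run, an M-step would re-estimate the weights, means and segment-specific regression coefficients from these posteriors, and the two steps would alternate until the likelihood converges (to a local maximum, as the abstract notes).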