Robust machine learning methods

Abstract: We are surrounded by data in our daily lives. The rent of our houses, the number of electricity units consumed, the prices of different products at a supermarket, the daily temperature, our medicine prescriptions, and our internet search history are all different forms of data. Data can be used in a wide range of applications. For example, one can use data to predict product prices in the future, to predict tomorrow's temperature, to recommend videos, or to suggest better prescriptions. However, in order to do any of the above, one must first learn a model from data. A model is a mathematical description of how the phenomenon we are interested in behaves, e.g., how does the temperature vary? Is it periodic? What kinds of patterns does it have? Machine learning is about this process of learning models from data by building on disciplines such as statistics and optimization.

Learning models comes with many different challenges. Some challenges are related to how flexible the model is, some to the size of the data, some to computational efficiency, and so on. One of the challenges is that of data outliers. For instance, due to a war in a country, exports could stop and there could be a sudden spike in the prices of different products. This sudden jump in prices is an outlier, or corruption of the normal situation, and must be accounted for when learning the model. Another challenge could be that data is collected in one situation but the model is to be used in another. For example, one might have data from vaccine trials in which the participants were mostly old people, but one might want to decide whether or not to use the vaccine for the whole population, which contains people of all age groups. One must then account for this difference when learning models, because the conclusions drawn may not be valid for the young people in the population. Yet another challenge could arise when data is collected from different sources or contexts. For example, a shopkeeper might have data on sales of paracetamol both when there was a flu outbreak and when there was not, and she might want to decide how much paracetamol to stock for the next month. In this situation, it is difficult to know whether there will be a flu outbreak next month, so deciding how much to stock is a challenge. This thesis tries to address these and other similar challenges.

In paper I, we address the challenge of data corruption, i.e., learning models in a robust way when some fraction of the data is corrupted. In paper II, we apply the methodology of paper I to the problem of localization in wireless networks. Paper III addresses the challenge of estimating the causal effect of an exposure on an outcome variable from spatially collected data (e.g., whether increasing the number of police personnel in an area reduces the number of crimes there). Paper IV addresses the challenge of learning improved decision policies, e.g., which treatment to assign to which patient, given past data on treatment assignments. In paper V, we look at the challenge of learning models when data is acquired from different contexts and the future context is unknown. In paper VI, we address the challenge of predicting count data across space, e.g., the number of crimes in an area, and quantifying the uncertainty of the prediction. In paper VII, we address the challenge of learning models when data points arrive in a streaming fashion, i.e., point by point. The proposed method enables online training and also yields some robustness properties.
