Intelligence-Driven Software Performance Assurance

Abstract: Software performance assurance is critical to the success of software products, which are nowadays embedded in many parts of our lives. Performance evaluation approaches such as performance modeling and testing, as well as runtime performance control methods, all contribute to realizing software performance assurance. Many common approaches to the challenges in this area rely on performance models or on system models and source code. Although modeling provides deep insight into system behavior, developing a detailed model is challenging. Furthermore, software artifacts such as models and source code might not be readily available at all points in the development lifecycle. This thesis leverages the potential of machine learning (ML) and evolutionary search-based techniques to provide viable solutions for addressing the challenges in different aspects of software performance assurance efficiently and effectively.

In this thesis, we first investigate the capabilities of model-free reinforcement learning to address the objectives of robustness testing problems. We develop two self-adaptive reinforcement learning-driven test agents, called SaFReL and RELOAD, which generate effective platform-based test scenarios and test workloads, respectively. The resulting scenarios and workloads help testers and software engineers meet their objectives efficiently without relying on models or source code. SaFReL and RELOAD learn optimal policies (ways) to meet the test objectives and can adaptively reuse the learned policies in other testing settings. Policy reuse can increase test efficiency and save costs, for example when testing similar test objectives or software systems with comparable performance sensitivity.

Next, we leverage the potential of evolutionary computation algorithms, namely genetic algorithms, evolution strategies, and particle swarm optimization, to generate failure-revealing test scenarios for robustness testing of AI systems. In this part, we choose autonomous driving systems as a prevailing example of contemporary AI systems. We study the efficacy of the proposed evolutionary search-based test generation techniques, evaluating primarily to what extent they can trigger failures. Moreover, we investigate the diversity of those failures and compare the techniques to existing baseline solutions.

Finally, we again use the potential of model-free reinforcement learning to develop adaptive ML-driven runtime performance control approaches. We present a response time preservation method for a representative type of industrial application and a resource allocation technique for dynamic workloads in a data grid application. The proposed ML-driven techniques learn how to adjust tunable parameters and resource configurations at runtime to keep performance continually compliant with the requirements and to further optimize runtime performance. We evaluate the efficacy of these approaches and show how effectively they improve performance and keep the performance requirements satisfied under varying conditions, such as dynamic workloads and runtime events that cause substantial response time deviations.
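To make the reinforcement learning framing of the first part concrete, below is a minimal Q-learning skeleton in the spirit of the SaFReL-style test agents described above. The action set, the env wrapper, and the reward convention (rewarding progress toward a response-time requirement violation) are illustrative assumptions for this sketch, not the agents' actual design from the thesis.

    import random
    from collections import defaultdict

    # Hypothetical sketch of a model-free RL test agent: the agent learns which
    # platform resource restrictions (actions) drive the system under test
    # toward a response-time requirement violation (reward).
    ACTIONS = ["reduce_cpu", "reduce_memory", "reduce_disk_io"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

    q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def choose_action(state):
        # epsilon-greedy: explore occasionally, otherwise exploit the learned policy
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(q_table[state], key=q_table[state].get)

    def learn_episode(env, max_steps=50):
        # env is an assumed wrapper around the system under test that applies an
        # action, measures response time, and returns (next_state, reward, done)
        state = env.reset()
        for _ in range(max_steps):
            action = choose_action(state)
            next_state, reward, done = env.step(action)
            best_next = max(q_table[next_state].values())
            # standard Q-learning temporal-difference update
            q_table[state][action] += ALPHA * (
                reward + GAMMA * best_next - q_table[state][action]
            )
            state = next_state
            if done:  # requirement violated: the test objective is met
                break

Because the learned q_table is kept across episodes, a skeleton like this also illustrates the policy-reuse idea: a policy learned on one system can seed exploration on a system with comparable performance sensitivity.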
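The second part relies on evolutionary search. The following is a minimal genetic-algorithm sketch of failure-revealing scenario generation, assuming a hypothetical simulate() fitness function (e.g., minimum distance to collision in a driving simulator, where lower means closer to failure) and a fixed-length parameter-vector encoding of scenarios; the encodings, operators, and fitness definitions used in the thesis differ.

    import random

    # Hypothetical GA sketch: a scenario is a vector of normalized parameters
    # (e.g., road curvature, pedestrian speed, weather intensity), and the
    # search minimizes simulate(scenario), an assumed distance-to-failure metric.
    SCENARIO_LEN = 6
    POP_SIZE, GENERATIONS = 20, 30
    MUT_RATE = 0.1

    def random_scenario():
        return [random.uniform(0.0, 1.0) for _ in range(SCENARIO_LEN)]

    def mutate(s):
        # Gaussian perturbation applied gene-wise with probability MUT_RATE
        return [g + random.gauss(0, 0.1) if random.random() < MUT_RATE else g
                for g in s]

    def crossover(a, b):
        # single-point crossover between two parent scenarios
        cut = random.randrange(1, SCENARIO_LEN)
        return a[:cut] + b[cut:]

    def evolve(simulate):
        pop = [random_scenario() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            pop.sort(key=simulate)          # best (closest to failure) first
            survivors = pop[: POP_SIZE // 2]
            children = [mutate(crossover(random.choice(survivors),
                                         random.choice(survivors)))
                        for _ in range(POP_SIZE - len(survivors))]
            pop = survivors + children
        return min(pop, key=simulate)       # most failure-revealing scenario found

Evolution strategies and particle swarm optimization, also studied in the thesis, would replace the crossover/mutation operators above with their own update rules while keeping the same fitness-driven loop.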
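The runtime control part can likewise be pictured as a learning control loop. The sketch below shows a single Q-learning control step that adapts a resource allocation to keep the measured response time under a requirement; the state discretization, action set, requirement threshold, and reward shaping here are all hypothetical placeholders rather than the thesis's design.

    import random
    from collections import defaultdict

    # Hypothetical runtime controller: each control interval it observes the
    # workload and response time, then adjusts the resource allocation (e.g.,
    # the number of workers) so the response time stays under the requirement.
    REQUIREMENT_MS = 200.0          # assumed response-time requirement
    ACTIONS = [-1, 0, +1]           # remove, keep, or add one worker
    q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def control_step(state, measure, apply_delta,
                     epsilon=0.1, alpha=0.2, gamma=0.9):
        # state: discretized (workload level, current allocation) tuple;
        # measure and apply_delta are assumed hooks into the running system
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(q[state], key=q[state].get))
        apply_delta(action)                  # reconfigure at runtime
        response_ms, next_state = measure()  # observe the effect
        # reward compliance; penalize violations in proportion to the deviation
        reward = (1.0 if response_ms <= REQUIREMENT_MS
                  else -(response_ms / REQUIREMENT_MS))
        q[state][action] += alpha * (reward + gamma * max(q[next_state].values())
                                     - q[state][action])
        return next_state

Running control_step in a loop lets the controller keep adapting as workloads shift or runtime events cause response time deviations, which is the continual-compliance behavior the abstract describes.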
