Cloud computing plays a major role in the IT industry, and the number of cloud providers and the services they offer is continuously growing. This increasing diversity of cloud services, particularly infrastructure services, demands systematic benchmarking to assess cloud service performance and thus assist cloud users in service selection. However, manually conducting cloud benchmarks is time-consuming and error-prone. Previous work addressed these problems with automation approaches but failed to provide a convenient way to automate the installation and configuration of a benchmark. To solve this benchmark provisioning problem, this thesis introduces a benchmark automation framework called Cloud WorkBench (CWB), in which benchmarks are defined entirely in code and executed without manual interaction. CWB allows configurable benchmarks to be defined in a modular manner so that they are portable across cloud providers and their regions. New benchmarks can be added at runtime, and variations thereof are conveniently configurable via a web interface. Integrated periodic scheduling capabilities trigger benchmark executions at the scheduled times; each execution automatically acquires cloud resources, prepares and runs the benchmark, and releases the previously acquired resources. CWB is used to conduct a case study in the Amazon EC2 cloud that examines a specific performance characteristic for a combination of different service types. The results reveal limitations of the cheapest service type, show that performance does not necessarily correlate with service pricing, and illustrate that a service type newly introduced at the time of the case study reduces variability and enhances performance. CWB has already executed nearly 20,000 benchmarks in total and is used in ongoing research.
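To make the acquire-prepare-run-release lifecycle mentioned above more concrete, the following is a minimal, hypothetical Python sketch of one scheduled benchmark execution. All class and function names (FakeCloudProvider, execute_scheduled_benchmark, and so on) are illustrative assumptions for this abstract and do not reflect CWB's actual implementation or API.

    # Hypothetical sketch of the acquire-prepare-run-release lifecycle that a
    # scheduled benchmark execution follows. Names are illustrative only and
    # are not part of CWB.

    import time


    class FakeCloudProvider:
        """Stand-in for a cloud provider API (e.g., Amazon EC2)."""

        def acquire_vm(self, region, instance_type):
            # A real implementation would request a VM and wait until it is reachable.
            print(f"Acquiring {instance_type} in {region} ...")
            return {"id": "vm-123", "region": region, "type": instance_type}

        def release_vm(self, vm):
            print(f"Releasing {vm['id']} ...")


    def prepare_benchmark(vm, benchmark):
        # Install and configure the benchmark on the acquired VM
        # (the provisioning step that CWB automates).
        print(f"Installing {benchmark} on {vm['id']} ...")


    def run_benchmark(vm, benchmark):
        # Execute the benchmark workload and collect its result.
        print(f"Running {benchmark} on {vm['id']} ...")
        start = time.time()
        # ... actual workload would run here ...
        return {"benchmark": benchmark, "duration_s": time.time() - start}


    def execute_scheduled_benchmark(provider, region, instance_type, benchmark):
        """One periodic execution: acquire, prepare, run, release."""
        vm = provider.acquire_vm(region, instance_type)
        try:
            prepare_benchmark(vm, benchmark)
            return run_benchmark(vm, benchmark)
        finally:
            # Resources are always released, even if the benchmark fails.
            provider.release_vm(vm)


    if __name__ == "__main__":
        result = execute_scheduled_benchmark(
            FakeCloudProvider(), "us-east-1", "t2.micro", "disk-io-benchmark"
        )
        print(result)

In this sketch, the try/finally block reflects the idea that acquired cloud resources should be released even when a benchmark run fails, which is one reason manual execution is error-prone.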
This bachelor thesis was published by the University of Zurich on Merlin.