A Cloud Benchmark Suite Combining Micro and Applications Benchmarks

Abstract

Micro and application performance benchmarks are commonly used to guide cloud service selection. However, they are often considered in isolation, in setups that are hard to reproduce and with flawed execution strategies. This paper presents a new execution methodology that combines micro and application benchmarks into a benchmark suite called RMIT Combined, integrates this suite into an automated cloud benchmarking environment, and implements a repeatable execution strategy. Additionally, we contribute a newly crafted Web serving benchmark called WPBench with three different load scenarios. A case study in the Amazon EC2 cloud demonstrates that, for the Web serving benchmark WPBench, choosing a cost-efficient instance type can deliver up to 40% better performance at 40% lower cost. Contrary to prior research, our findings reveal that network performance no longer varies to a relevant degree. Our results also show that choosing a modern virtualization type can improve disk utilization by up to 10% for I/O-heavy workloads.

Publication
Companion of the 2018 ACM/SPEC International Conference on Performance Engineering

This paper was presented at the 4th International Workshop on Quality-Aware DevOps (QUDOS), which was co-located with the 9th ACM/SPEC International Conference on Performance Engineering (ICPE).