Contents

1.1 The Many Faces of Benchmarking
1.2 What's Inside the Cloud?
1.3 Quality Insights through Experimentation
1.4 The Cloud as Experimentation Testbed
1.5 What to Expect from this Book
1.6 Cloud Service Benchmarking in the Larger Context of Benchmarking

2.1 What is Cloud Service Benchmarking?
2.2 Benchmarking vs. Monitoring
2.3 Building Blocks of Benchmarks

3.2 Quality of (Cloud) Services
3.3 Examples of Qualities
3.3.4 Elastic Scalability
3.4.2 Security vs. Performance
3.4.3 Latency vs. Throughput
3.5 Service Level Agreements

4.1 An Overview of Traditional Motivations
4.3 Continuous Quality Improvement
4.3.2 Requirements Elicitation and Service Selection
4.3.3 Capacity Planning and Deployment Optimisation
4.4 Comparative Studies and Systems Research
4.5 Organizational Process Proficiency

5.1 Requirements for Benchmarks
5.2 Objectives of Benchmark Design
5.2.1 General Design Objectives
5.2.2 New Design Objectives in Cloud Service Benchmarking
5.3 Resolving Conflicts between Design Objectives

6 Quality Metrics and Measurement Methods
6.1 What is a Quality Metric?
6.2 Requirements for Quality Metrics
6.3 Examples of Quality Metrics
6.3.1 Performance Metrics
6.3.2 Availability and Fault-Tolerance Metrics
6.3.3 Data Consistency Metrics
6.3.4 Elastic Scalability Metrics
6.5 Defining New Quality Metrics

7.1 Characterizing Workloads
7.2 Synthetic vs. Trace-Based Workloads
7.3 Application-Driven vs. Micro-Benchmarks
7.4 Open, Closed, and Partly-Open Workload Generation
7.5 Scalability of Workload Generation
7.5.1 Scaling Synthetic Workloads
7.5.2 Scaling Trace-Based Workloads
7.6 Comparing Workload Types and Generation Strategies

Part III Benchmark Execution

8 Implementation Objectives and Challenges
8.1 An Overview of Challenges in Benchmark Implementations
8.4 Measurement Results Collection
8.5 Reproducibility and Repeatability

9 Experiment Setup and Runtime
9.1 An Experiment Setup and Execution Process
9.2 Resource Provisioning, Deployment, and Configuration
9.3 Experiment Preparation

Part IV Benchmark Results

10 Turning Data into Insights
10.1 A Process for Gaining Insights
10.2 Exploratory Data Analysis
10.3 Confirmatory Data Analysis
10.4.1 Spreadsheet Software
10.4.2 Databases and Data Warehouses
10.4.3 Programming Languages and Frameworks
10.4.4 Distributed Data Processing Frameworks

11.1 Characteristics of Benchmarking Data

12.1 What is Data Analysis?
12.2 Descriptive Data Aggregation
12.3.1 Visualizing Absolute Values
12.3.2 Visualizing Value Distribution
12.3.3 Visualizing Relationships
12.4 Advanced Analysis Methods
12.4.2 Confirming Assumptions
12.4.3 Making Predictions

13 Using Insights on Cloud Service Quality
13.1 Communicating Insights
13.2 Acting based on Insights
13.2.1 Consumption Decisions
13.2.2 Service Configuration
13.2.3 Application Design

14 Getting Started in Cloud Service Benchmarking
14.1 How to Read this Chapter
14.2 Benchmarking Storage Services
14.3 Benchmarking Virtual Machines
14.4 Benchmarking Other Cloud Services

15.1 The Importance of Cloud Service Benchmarking
15.2 Summary of this Book's Content

References and Web Links
References
Web Links
List of Abbreviations