Cloud Service Benchmarking: Measuring Quality of Cloud Services from a Client Perspective, 1st ed. 2017 [Hardback]

  • Format: Hardback, XIV + 167 pages, 25 illustrations (11 in color, 14 black and white), height x width: 235x155 mm, weight: 4026 g
  • Publication date: 03-Apr-2017
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319554824
  • ISBN-13: 9783319554822
  • Hardback
  • Price: 67,23 €*
  • * the price is final, i.e., no further discounts apply
  • Regular price: 79,09 €
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks
  • Free shipping

Cloud service benchmarking can provide important, sometimes surprising, insights into the quality of services and can lead to a more quality-driven design and engineering of complex software architectures that use such services. Starting with a broad introduction to the field, this book guides readers step-by-step through the process of designing, implementing and executing a cloud service benchmark, as well as understanding and dealing with its results. It covers all aspects of cloud service benchmarking, i.e., both benchmarking the cloud and benchmarking in the cloud, at a basic level.

The book is divided into five parts: Part I discusses what cloud benchmarking is, provides an overview of cloud services and their key properties, and describes the notion of a cloud system and cloud-service quality. It also addresses the benchmarking lifecycle and the motivations behind running benchmarks in particular phases of an application lifecycle. Part II then focuses on benchmark design by discussing key objectives (e.g., repeatability, fairness, or understandability), defining metrics and measurement methods, and giving advice on developing one's own measurement methods and metrics. Next, Part III explores benchmark execution, covering implementation challenges and objectives as well as aspects such as runtime monitoring and result collection. Subsequently, Part IV addresses benchmark results, covering topics such as an abstract process for turning data into insights, data preprocessing, and basic data analysis methods. Lastly, Part V concludes the book with a summary, suggestions for further reading, and pointers to benchmarking tools available on the Web.

The book is intended for researchers and graduate students of computer science and related subjects looking for an introduction to benchmarking cloud services, as well as for industry practitioners who are interested in evaluating the quality of cloud services or who want to assess key qualities of their own implementations through cloud-based experiments.

Part I Fundamentals
1 Introduction 5(6)
1.1 The Many Faces of Benchmarking 5(1)
1.2 What's Inside the Cloud? 6(1)
1.3 Quality Insights through Experimentation 7(1)
1.4 The Cloud as Experimentation Testbed 8(1)
1.5 What to Expect from this Book 8(1)
1.6 Cloud Service Benchmarking in the Larger Context of Benchmarking 9(2)
2 Terms and Definitions 11(6)
2.1 What is Cloud Service Benchmarking? 11(1)
2.2 Benchmarking vs. Monitoring 12(2)
2.3 Building Blocks of Benchmarks 14(3)
3 Quality 17(10)
3.1 What is Quality? 17(1)
3.2 Quality of (Cloud) Services 18(1)
3.3 Examples of Qualities 19(3)
3.3.1 Performance 19(1)
3.3.2 Availability 20(1)
3.3.3 Security 20(1)
3.3.4 Elastic Scalability 21(1)
3.3.5 Data Consistency 21(1)
3.4 Tradeoffs 22(3)
3.4.1 CAP and PACELC 23(1)
3.4.2 Security vs. Performance 24(1)
3.4.3 Latency vs. Throughput 24(1)
3.5 Service Level Agreements 25(2)
4 Motivations 27(10)
4.1 An Overview of Traditional Motivations 27(1)
4.2 SLA Management 28(1)
4.3 Continuous Quality Improvement 28(2)
4.3.1 Quality Control 29(1)
4.3.2 Requirements Elicitation and Service Selection 30(1)
4.3.3 Capacity Planning and Deployment Optimisation 30(1)
4.4 Comparative Studies and Systems Research 30(1)
4.5 Organizational Process Proficiency 31(6)
Part II Benchmark Design
5 Design Objectives 37(10)
5.1 Requirements for Benchmarks 37(1)
5.2 Objectives of Benchmark Design 38(5)
5.2.1 General Design Objectives 38(3)
5.2.2 New Design Objectives in Cloud Service Benchmarking 41(2)
5.3 Resolving Conflicts between Design Objectives 43(4)
6 Quality Metrics and Measurement Methods 47(14)
6.1 What is a Quality Metric? 47(1)
6.2 Requirements for Quality Metrics 48(1)
6.3 Examples of Quality Metrics 49(6)
6.3.1 Performance Metrics 50(1)
6.3.2 Availability and Fault-Tolerance Metrics 51(1)
6.3.3 Data Consistency Metrics 52(1)
6.3.4 Elastic Scalability Metrics 53(2)
6.4 Cost 55(1)
6.5 Defining New Quality Metrics 56(2)
6.6 Measurement Methods 58(3)
7 Workloads 61(12)
7.1 Characterizing Workloads 61(1)
7.2 Synthetic vs. Trace-based Workloads 62(1)
7.3 Application-Driven vs. Micro-Benchmarks 63(1)
7.4 Open, Closed, and Partly-Open Workload Generation 64(1)
7.5 Scalability of Workload Generation 65(2)
7.5.1 Scaling Synthetic Workloads 66(1)
7.5.2 Scaling Trace-Based Workloads 67(1)
7.6 Comparing Workload Types and Generation Strategies 67(6)
Part III Benchmark Execution
8 Implementation Objectives and Challenges 73(12)
8.1 An Overview of Challenges in Benchmark Implementations 73(1)
8.2 Correctness 74(1)
8.3 Distribution 75(3)
8.4 Measurement Results Collection 78(2)
8.5 Reproducibility and Repeatability 80(1)
8.6 Portability 81(1)
8.7 Ease-of-Use 82(3)
9 Experiment Setup and Runtime 85(16)
9.1 An Experiment Setup and Execution Process 85(1)
9.2 Resource Provisioning, Deployment, and Configuration 86(3)
9.3 Experiment Preparation 89(1)
9.4 Experiment Runtime 90(1)
9.5 Data Collection 90(2)
9.6 Data Provenance 92(1)
9.7 Data Storage 93(2)
9.8 Runtime Cleanup 95(6)
Part IV Benchmark Results
10 Turning Data into Insights 101(12)
10.1 A Process for Gaining Insights 101(2)
10.2 Exploratory Data Analysis 103(2)
10.3 Confirmatory Data Analysis 105(2)
10.4 Data Analysis Tools 107(6)
10.4.1 Spreadsheet Software 107(1)
10.4.2 Databases and Data Warehouses 108(1)
10.4.3 Programming Languages and Frameworks 109(1)
10.4.4 Distributed Data Processing Frameworks 110(3)
11 Data Preprocessing 113(10)
11.1 Characteristics of Benchmarking Data 113(3)
11.2 Data Selection 116(1)
11.3 Missing Values 117(1)
11.4 Resampling 118(3)
11.5 Data Transformation 121(2)
12 Data Analysis 123(14)
12.1 What is Data Analysis? 123(1)
12.2 Descriptive Data Aggregation 124(2)
12.3 Data Visualization 126(3)
12.3.1 Visualizing Absolute Values 126(2)
12.3.2 Visualizing Value Distribution 128(1)
12.3.3 Visualizing Relationships 129(1)
12.4 Advanced Analysis Methods 129(8)
12.4.1 Finding Patterns 130(3)
12.4.2 Confirming Assumptions 133(1)
12.4.3 Making Predictions 133(4)
13 Using Insights on Cloud Service Quality 137(14)
13.1 Communicating Insights 137(2)
13.2 Acting based on Insights 139(12)
13.2.1 Consumption Decisions 139(2)
13.2.2 Service Configuration 141(1)
13.2.3 Application Design 142(9)
Part V Conclusions
14 Getting Started in Cloud Service Benchmarking 151(4)
14.1 How to Read this Chapter 151(1)
14.2 Benchmarking Storage Services 151(1)
14.3 Benchmarking Virtual Machines 152(1)
14.4 Benchmarking Other Cloud Services 153(2)
15 Conclusion 155(4)
15.1 The Importance of Cloud Service Benchmarking 155(1)
15.2 Summary of this Book's Content 156(3)
References and Web Links 159(1)
References 159(5)
Web Links 164(3)
List of Abbreviations 167
David Bermbach is a Senior Researcher in the Information Systems Engineering Research Group at TU Berlin, Germany. His main research interests are in cloud and fog computing with a focus on distributed data management. David holds a PhD in computer science and a diploma in business engineering, both from Karlsruhe Institute of Technology, Germany.

Erik Wittern is a Research Staff Member at the IBM Thomas J. Watson Research Center in New York, USA. His research interests are in the areas of cloud computing and software engineering; specifically, he is concerned with supporting developers in providing and consuming web APIs. Prior to joining IBM Research, Erik completed his PhD in computer science at the Karlsruhe Institute of Technology, Germany.

Stefan Tai is Professor and Head of the Chair of Information Systems Engineering at TU Berlin, Germany. His principal research interests are in the areas of cloud computing, distributed systems, and software service engineering. Prior to his appointment in Berlin, Stefan was a Professor at Karlsruhe Institute of Technology and a Research Staff Member at the IBM Thomas J. Watson Research Center in New York, USA.