1.4 Motivation and Significance of the Project . . . . . 4
2.1.1 Scale of Things---Peta and Exascale . . . . . 11
2.1.2 Current Resource Availability . . . . . 12
2.1.3 High Performance Computing . . . . . 13
2.1.5 Existing HPC Implementation Architecture . . . . . 19
2.1.6 Significance of HPC Clusters on Hybrid Cloud . . . . . 24
2.1.7 Hybrid Cloud Future Improvements . . . . . 25
2.1.8 HPC + Cloud Contrasted with Existing Implementation Architectures . . . . . 27
2.1.9 Profit-Based Scheduling . . . . . 28
2.1.10 Preemptable Scheduling . . . . . 29
2.1.11 Priority-Based Scheduling . . . . . 30
2.2 Various Aspects and Current Issues . . . . . 31
2.2.1 Service Provisioning . . . . . 31
3.1 Cloud Benchmarking Instance Specifications and Assumptions . . . . . 35
3.2 Classic Benchmark Test Categories . . . . . 36
3.2.1 Dhrystone Benchmark . . . . . 36
3.2.4 Whetstone Benchmark . . . . . 37
3.3 Disk, USB and LAN Benchmarks . . . . . 37
3.4 Multithreading Benchmarks . . . . . 38
3.4.2 Whetstone Benchmark . . . . . 39
3.4.4 MP Memory Speed Tests . . . . . 40
3.4.5 MP Memory Bus Speed Tests . . . . . 40
3.4.6 MP Memory Random Access Speed Benchmark . . . . . 40
3.5 OpenMP Benchmarks for Parallel Processing Performance . . . . . 42
3.5.2 Original OpenMP Benchmark . . . . . 42
3.6 Memory BusSpeed Benchmark . . . . . 42
3.6.2 Random Memory Benchmark . . . . . 43
4 Computation of Large Datasets . . . . . 47
4.1 Challenges and Considerations in Computing Large-Scale Data . . . . . 48
4.1.1 Extreme Parallelism and Heterogeneity . . . . . 48
4.1.2 Data Transfer Costs . . . . . 48
4.1.3 Increased Failure Rates . . . . . 49
4.1.4 Power Requirements at Exascale . . . . . 49
4.1.5 Memory Requirements at Exascale . . . . . 50
4.2 Work Done to Enable Computation of Large Datasets . . . . . 50
4.2.1 OpenStack and Azure Implementation . . . . . 51
4.2.2 Cloud Interconnectivity . . . . . 56
4.2.4 Simulations and Tools for Large-Scale Data Processing . . . . . 59
4.3 Summary, Discussion and Future Directions . . . . . 63
4.3.1 What Might the Model Look Like? . . . . . 63
4.3.2 Application Primitives---Key to Performance . . . . . 64
5 Optimized Online Scheduling Algorithms . . . . . 65
5.1 Dynamic Task Splitting Allocator (DTSA) . . . . . 65
5.2 Procedural Parallel Scheduling Heuristic (PPSH) . . . . . 67
5.3 Scalable Parallel Scheduling Heuristic (SPSH) . . . . . 75
6.1.1 Algorithm 1: Dynamic Task Splitting Allocator (DTSA) . . . . . 83
6.1.2 Algorithm 2: PPSH Phase 1---Mapping Phase . . . . . 83
6.1.3 Algorithm 3: PPSH Phase 2---Shuffling Phase . . . . . 84
6.1.4 Algorithm 4: PPSH Phase 3---Eliminating Phase . . . . . 84
6.1.5 Algorithm 5: PPSH Phase 4---Finishing Phase . . . . . 84
6.1.6 Algorithm 6: Scalable Parallel Scheduling Heuristic (SPSH) . . . . . 84
7 Conclusion and Future Work . . . . . 91
Bibliography . . . . . 93