
Approximate Dynamic Programming for Dynamic Vehicle Routing, 1st ed. 2017 [Hardback]

  • Format: Hardback, XXV + 197 pages, 55 illustrations (6 in color, 49 black and white), height x width: 235x155 mm, weight: 4616 g
  • Series: Operations Research/Computer Science Interfaces Series 61
  • Publication date: 27-Apr-2017
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319555103
  • ISBN-13: 9783319555102
This book provides a straightforward overview for every researcher interested in stochastic dynamic vehicle routing problems (SDVRPs). It is written both for the applied researcher looking for suitable solution approaches to particular problems and for the theoretical researcher looking for effective and efficient methods of stochastic dynamic optimization and approximate dynamic programming (ADP). To this end, the book consists of two parts. The first part presents the general methodology required for modeling and approaching SDVRPs, including adapted as well as new general anticipatory ADP methods tailored to the needs of dynamic vehicle routing. Since stochastic dynamic optimization is often complex and not always intuitive at first glance, the author accompanies the ADP methodology with illustrative examples from the field of SDVRPs.
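For orientation, the stochastic dynamic decision problems treated in the first part are modeled as Markov decision processes, whose optimal policies are characterized by a Bellman-style optimality equation. The generic textbook form below is intended only as a pointer and is not necessarily the notation used in the book:

\[
  V_t(s) \;=\; \max_{a \in A(s)} \Big( R(s,a) + \mathbb{E}\big[\, V_{t+1}(s') \mid s, a \,\big] \Big)
\]

Here, V_t(s) is the expected reward-to-go of decision state s at decision point t, A(s) the set of feasible decisions in s, R(s,a) the immediate reward of decision a, and s' the random successor state. Approximate dynamic programming replaces the exact values V_{t+1} with approximations that are either learned offline from simulations or estimated online by sampling.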

The second part of the book then shows how this theory is applied to a specific SDVRP, starting from the real-world application. The author describes an SDVRP with stochastic customer requests that is frequently addressed in the literature, shows in detail how this problem can be modeled as a Markov decision process, and presents several anticipatory solution approaches based on ADP. In an extensive computational study, he demonstrates the advantages of the presented approaches compared to conventional heuristics. To provide deep insights into the functionality of ADP, he complements this with a comprehensive analysis of the ADP approaches.
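As a rough illustration of what "anticipatory" means in this context, the sketch below implements a generic one-step rollout for a dynamic routing decision: each candidate next customer is evaluated by simulating sampled scenarios of future requests under a simple greedy base policy, and the candidate with the best sampled reward-to-go is chosen. The request model, the base policy, and all function names are simplifying assumptions for illustration only and are not taken from the book.

import random

# Minimal rollout sketch for a dynamic routing decision (illustrative only).
# State: current position, set of open customer locations, remaining time.
# Decision: which open customer to serve next.

def travel_time(a, b):
    """Euclidean travel time between two points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def greedy_base_policy(pos, open_customers, time_left):
    """Base policy: repeatedly serve the nearest feasible open customer."""
    served = 0
    customers = list(open_customers)
    while customers:
        nxt = min(customers, key=lambda c: travel_time(pos, c))
        t = travel_time(pos, nxt)
        if t > time_left:
            break
        time_left -= t
        pos = nxt
        customers.remove(nxt)
        served += 1
    return served  # reward: number of customers served

def sample_future_requests(rng, n_expected):
    """Sample one scenario of future customer requests (toy uniform model)."""
    n = rng.randint(0, n_expected * 2)
    return [(rng.random() * 100, rng.random() * 100) for _ in range(n)]

def rollout_decision(pos, open_customers, time_left, rng, n_samples=30):
    """Pick the next customer by one-step lookahead plus base-policy rollout."""
    best_decision, best_value = None, float("-inf")
    for decision in open_customers:
        t = travel_time(pos, decision)
        if t > time_left:
            continue  # decision is infeasible
        immediate = 1  # serving this customer now yields one unit of reward
        # Estimate the value of the post-decision state by simulation.
        future = 0.0
        for _ in range(n_samples):
            scenario = sample_future_requests(rng, n_expected=5)
            remaining = [c for c in open_customers if c != decision] + scenario
            future += greedy_base_policy(decision, remaining, time_left - t)
        value = immediate + future / n_samples
        if value > best_value:
            best_decision, best_value = decision, value
    return best_decision

if __name__ == "__main__":
    rng = random.Random(42)
    depot = (50.0, 50.0)
    open_customers = [(20.0, 30.0), (80.0, 75.0), (55.0, 40.0)]
    choice = rollout_decision(depot, open_customers, time_left=200.0, rng=rng)
    print("Rollout chooses to serve next:", choice)

The approaches developed in the book, such as approximate value iteration in Chapter 6 and the ATB-based rollout algorithm in Chapter 10, are considerably more refined than this sketch, in particular regarding value function approximation and efficient sampling.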

Contents

1 Introduction
1.1 Prescriptive Analytics
1.2 Scope of This Work
1.3 Outline of the Following Chapters
1.4 A Recipe for ADP in SDVRPs
1.4.1 The Application
1.4.2 The Model
1.4.3 Anticipatory Approaches
Part I Dynamic Vehicle Routing
2 Rich Vehicle Routing: Environment
2.1 Vehicle Routing
2.2 RVRP: Characteristics and Definition
2.3 RVRPs in Logistics Management
2.4 RVRPs in Hierarchical Decision Making
2.5 Recent Developments of the RVRP-Environment
2.5.1 E-Commerce and Globalization
2.5.2 Urbanization and Demography
2.5.3 Urban Environment and Municipal Regulations
2.5.4 Technology
2.5.5 Data and Forecasting
2.6 Implications
3 Rich Vehicle Routing: Applications
3.1 General RVRP-Entities
3.1.1 Infrastructure
3.1.2 Vehicles
3.1.3 Customers
3.2 Plans
3.3 Objectives
3.3.1 Costs
3.3.2 Reliability
3.3.3 Objective Measures
3.4 Constraints
3.4.1 Time Windows
3.4.2 Working Hours
3.4.3 Capacities
3.5 Drivers of Uncertainty
3.5.1 Travel Times
3.5.2 Service Times
3.5.3 Demands
3.5.4 Requests
3.6 Classification
3.7 Service Vehicles
3.8 Transportation Vehicles
3.8.1 Passenger Transportation
3.8.2 Transportation of Goods
3.9 Implications
3.9.1 Decision Support
3.9.2 Modeling of Planning Situations
3.9.3 Modeling of Uncertainty
3.9.4 Modeling of Subsequent Planning
3.9.5 Modeling of Applications
3.9.6 Modeling of Anticipation
3.9.7 Anticipatory Methods
4 Modeling
4.1 Stochastic Dynamic Decision Problem
4.1.1 Dynamic Decision Problems
4.2 Markov Decision Process
4.2.1 Definition
4.2.2 Decision Policies and Problem Realizations
4.3 Stochastic Dynamic Vehicle Routing
4.4 Modeling Planning Situations
4.4.1 Decision State
4.4.2 Decision Making
4.5 Modeling Uncertainty
4.5.1 Deterministic Modeling
4.5.2 Travel Time
4.5.3 Service Time
4.5.4 Demands
4.5.5 Requests
4.5.6 Stochastic Transitions in SDVRPs
4.6 Modeling SDVRPs as MDPs
4.6.1 Decision Points
4.6.2 Travel Times
4.6.3 Service Times
4.6.4 Demands
4.6.5 Requests
4.7 Vehicle Routing with Recourse Actions
4.8 Route-Based Markov Decision Process
4.9 Implications
4.9.1 Properties of SDVRP
4.9.2 Definition, Reconstruction, and Simulation
4.9.3 Anticipation and Prescriptive Analytics
5 Anticipation
5.1 Definition
5.2 Anticipation in SDVRPs
5.3 Perfect Anticipation
5.3.1 Optimal Policies
5.3.2 Derivation of Optimal Policies
5.3.3 Limitations
5.4 Classification of Anticipation
5.4.1 Reactive Versus Non-reactive
5.4.2 Implicit, Explicit, and Perfect
5.4.3 Focus of Anticipation: Offline and Online
5.5 Reactive Explicit Anticipation
6 Anticipatory Solution Approaches
6.1 Non-reactive Anticipation
6.1.1 Non-reactive Implicit Anticipation
6.1.2 Non-reactive Explicit Anticipation
6.2 Reactive Anticipation
6.2.1 Reactive Implicit Anticipation
6.2.2 Reactive Explicit Anticipation
6.2.3 Approximate Dynamic Programming
6.2.4 Reducing the SDP
6.2.5 Resulting Approaches
6.3 Lookahead and Rollout Algorithm
6.3.1 Functionality
6.3.2 Efficient Computing: Indifference Zone Selection
6.4 Value Function Approximation
6.5 Approximate Value Iteration
6.5.1 Post-decision State Space Representation
6.5.2 Aggregation
6.5.3 Partitioning: Lookup Table
6.5.4 Efficient Approximation Versus Effective Decision Making
6.5.5 Equidistant Lookup Table
6.5.6 Weighted Lookup Table
6.5.7 Dynamic Lookup Table
6.6 Hybrid Reactive Explicit Anticipation
6.6.1 Motivation
6.6.2 Hybrid Rollout Algorithm
6.6.3 Example: Comparison of Online and Hybrid RAs
7 Literature Classification
7.1 Classification
7.2 Travel Times
7.3 Service Times
7.4 Demands
7.5 Requests
7.6 Analysis
7.6.1 Time Distribution
7.6.2 Problem
7.6.3 Approaches
7.7 Implications
Part II Stochastic Customer Requests
8 Motivation
8.1 Application
8.2 Replanning and Anticipation
8.3 Outline
9 SDVRP with Stochastic Requests
9.1 Problem Statement
9.2 Markov Decision Process Formulation
9.3 Literature Review
10 Solution Algorithms
10.1 Routing and Sequencing Decisions
10.1.1 Subset Selection
10.1.2 Cheapest Insertion
10.1.3 Improvements
10.2 Myopic Policy
10.3 Non-reactive Implicit: Waiting Policies
10.4 Non-reactive Explicit: Anticipatory Insertion
10.5 Non-reactive Explicit: Cost Benefit
10.6 Offline Reactive Explicit: ATB
10.6.1 Aggregation and Partitioning
10.6.2 Extending the AVI-Vector Space
10.7 Online Reactive Explicit: Ad Hoc Sampling
10.8 Online/Hybrid Reactive Explicit: Rollout Algorithm
10.8.1 Myopic-Based Rollout Algorithm
10.8.2 ATB-Based Rollout Algorithm
11 Computational Evaluation
11.1 Instances
11.2 Parameter Tuning
11.2.1 Non-reactive
11.2.2 Reactive
11.3 Non-reactive Versus Offline Reactive
11.4 Offline Reactive Anticipation
11.4.1 Routing and Subset Selection
11.4.2 Budgeting Time
11.5 Online Reactive Anticipation
11.5.1 Online Versus Offline Anticipation
11.5.2 Runtime
11.5.3 Sample Runs
11.5.4 Indifference Zone Selection
11.5.5 Hybrid Anticipation
11.5.6 Spatial Versus Temporal Anticipation
11.6 Implications
12 Conclusion and Outlook
12.1 Summary
12.2 Managerial Implications
12.3 Future Research
12.3.1 Application Fields
12.3.2 Reactive Anticipation for SDVRPs
References
Index
Marlin Ulmer holds a graduate degree in Mathematics and a doctorate in Economics. He is currently a Research Associate at the Carl-Friedrich Gauß Department of the Technische Universität Braunschweig in Germany. His main research field is Prescriptive Analytics in Transportation; his particular research interests are Vehicle Routing, Stochastic Optimization, and Approximate Dynamic Programming.