Editor Biographies xvii
Contributors xxi
Foreword xxvii
Preface xxix

1 Introduction 1
Christopher D. Kiekintveld
1.1 Artificial Intelligence and Cybersecurity 1
1.1.1 Game Theory for Cybersecurity 2
1.1.2 Machine Learning for Cybersecurity 4
1.2 … 5
… 18

Part I Game Theory for Cyber Deception 21

2 Introduction to Game Theory 23
Christopher D. Kiekintveld
2.1 … 23
2.2 Example Two-Player Zero-Sum Games 24
2.3 … 27
2.3.1 … 28
2.4 … 32
2.4.1 … 34
2.5 … 35
2.5.1 … 35
2.5.2 Stackelberg Security Games 36
2.5.3 Applications in Cybersecurity 37
2.6 … 38
2.6.1 … 38
2.6.2 Applications in Cybersecurity 39
2.7 … 39
2.7.1 … 39
2.7.2 Applications in Cybersecurity 40
2.8 … 40
2.8.1 … 41
2.8.2 Applications in Cybersecurity 42
… 42

3 Scalable Algorithms for Identifying Stealthy Attackers in a Game-Theoretic Framework Using Deception 47
Christopher D. Kiekintveld
3.1 … 47
3.2 … 48
3.3 … 49
3.3.1 Case Study 1: Attackers with Same Exploits but Different Goals 50
3.3.2 Case Study 2: Attackers with Shared Exploits and Different Goals 51
3.3.3 Case Study 3: Attackers with Shared Exploits but Same Goals 51
3.4 … 51
3.5 Defender Decision Making 54
3.6 Attacker Decision Making 54
3.7 … 55
3.8 … 57
3.8.1 … 58
3.9 Evaluation of Heuristics 59
3.10 Conclusions and Future Direction 60
… 60

4 Honeypot Allocation Games over Attack Graphs for Cyber Deception 62
Christopher D. Kiekintveld
4.1 … 62
4.2 System and Game Model 63
4.2.1 … 63
4.2.2 General Game Formulation 64
4.2.3 … 64
4.2.4 … 65
4.2.5 … 65
4.2.6 … 66
4.2.7 … 67
4.3 Allocating t Honeypots Model 68
4.3.1 … 68
4.4 Dynamic Honeypot Allocation 69
4.4.1 Mixed Strategy, State Evolution, and Objective Function 69
4.4.2 … 70
4.5 … 71
4.6 Conclusion and Future Work 74
… 75
… 75

5 Evaluating Adaptive Deception Strategies for Cyber Defense with Human Adversaries 77
Christopher D. Kiekintveld
5.1 … 77
5.1.1 HoneyGame: An Abstract Interactive Game to Study Deceptive Cyber Defense 78
5.2 An Ecology of Defense Algorithms 79
5.2.1 Static Pure Defender 80
5.2.2 Static Equilibrium Defender 80
5.2.3 Learning with Linear Rewards (LLR) 81
5.2.4 Best Response with Thompson Sampling (BR-TS) 81
5.2.5 Probabilistic Best Response with Thompson Sampling (PBR-TS) 83
5.2.6 Follow the Regularized Leader (FTRL) 83
5.3 … 84
5.3.1 … 84
5.4 … 84
5.4.1 … 84
5.4.2 … 84
5.4.3 … 85
5.4.3.1 … 85
5.4.3.2 Attacks on Honeypots 85
5.4.3.3 Switching Behavior 86
5.4.3.4 Attack Distribution 88
5.5 … 89
5.5.1 … 89
5.5.2 … 90
5.5.2.1 … 90
5.5.2.2 Attacks on Honeypots 90
5.5.2.3 Switching Behavior 91
5.5.2.4 Attack Distribution 93
5.6 Towards Adaptive and Personalized Defense 94
… 94
… 95
… 95

6 A Theory of Hypergames on Graphs for Synthesizing Dynamic Cyber Defense with Deception 97
6.1 … 97
6.2 Attack-Defend Games on Graph 99
6.2.1 … 99
6.2.2 Specifying the Security Properties in LTL 100
6.3 … 102
6.4 Synthesis of Provably Secure Defense Strategies Using Hypergames on Graphs 103
6.4.1 Synthesis of Reactive Defense Strategies 103
6.4.2 Synthesis of Reactive Defense Strategies with Cyber Deception 105
… 108
… 110
… 111

Part II Game Theory for Cyber Security 113

7 Minimax Detection (MAD) for Computer Security: A Dynamic Program Characterization 115
7.1 … 115
7.1.1 Need for Cohesive Detection 116
7.1.2 Need for Strategic Detection 116
7.1.3 Minimax Detection (MAD) 117
7.2 … 118
7.2.1 … 119
7.2.2 … 120
7.2.3 … 121
7.2.4 … 121
7.3 … 122
7.3.1 Complexity Analysis 126
7.4 Illustrative Examples 126
… 128
… 130
7.A Generalization of the Results 130
… 134

8 Sensor Manipulation Games in Cyber Security 137
8.1 … 137
8.2 Measurement Manipulation Games 138
8.2.1 Saddle-Point Equilibria 139
8.2.2 Approximate Saddle-Point Equilibrium 142
8.3 … 144
8.3.1 … 145
8.4 Conclusions and Future Work 147
… 148

9 Adversarial Gaussian Process Regression in Sensor Networks 149
9.1 … 149
9.2 … 150
9.3 Anomaly Detection with Gaussian Process Regression 150
9.4 Stealthy Attacks on Gaussian Process Anomaly Detection 151
9.5 The Resilient Anomaly Detection System 153
9.5.1 Resilient Anomaly Detection as a Stackelberg Game 154
9.5.2 Computing an Approximately Optimal Defense 155
… 156
… 158
… 158

10 Moving Target Defense Games for Cyber Security: Theory and Applications 160
10.1 … 160
10.2 Moving Target Defense Theory 162
10.2.1 Game Theory for MTD 163
10.3 Single-Controller Stochastic Games for Moving Target Defense 165
10.3.1 … 165
10.3.2 Single-Controller Stochastic Games 167
10.3.2.1 Numerical Example 167
10.4 A Case Study for Applying Single-Controller Stochastic Games in MTD 168
10.4.1 Equilibrium Strategy Determination 170
10.4.2 Simulation Results and Analysis 171
10.5 Moving Target Defense Applications 174
10.5.1 Internet of Things (IoT) Applications 174
10.5.2 Machine Learning Applications 175
10.5.3 Prospective MTD Applications 175
… 176
… 177

11 Continuous Authentication Security Games 180
11.1 … 180
11.2 Background and Related Work 182
11.3 … 182
11.3.1 … 183
11.3.2 Intrusion Detection System Model 183
11.3.3 Model of Continuous Authentication 183
11.3.4 System States without an Attacker 184
11.3.5 … 185
11.3.5.1 Listening (l(t) = r, a(t) = 0) 186
11.3.5.2 Attacking (l(t) = 0, a(t) = r) 186
11.3.5.3 Waiting (l(t) = 0, a(t) = 0) 186
11.3.6 Continuous Authentication Game 187
11.4 Optimal Attack Strategy under Asymmetric Information 187
11.4.1 … 187
11.4.1.1 Waiting (l(t) = 0, a(t) = 0) 187
11.4.1.2 Listening (l(t) = r, a(t) = 0) 188
11.4.1.3 Attacking (l(t) = 0, a(t) = r) 188
11.4.2 Optimality of the Threshold Policy 189
11.4.2.1 Optimality of Listening 192
11.4.2.2 Optimality of Attacking 193
11.5 Optimal Defense Strategy 195
11.5.1 Expected Defender Utility 195
11.5.2 Analysis without an Attacker 195
11.5.3 Analysis with an Attacker 197
11.6 … 197
11.7 Conclusion and Discussion 200
… 202

12 Cyber Autonomy in Software Security: Techniques and Tactics 204
12.1 … 204
12.2 … 205
12.3 … 206
12.4 … 207
12.5 … 210
12.6 … 212
12.6.1 … 212
12.6.2 … 213
12.6.3 Finding Equilibriums 215
… 222
… 222
… 226
… 226
… 227

Part III Adversarial Machine Learning for Cyber Security 231

13 A Game Theoretic Perspective on Adversarial Machine Learning and Related Cybersecurity Applications 233
13.1 Introduction to Game Theoretic Adversarial Machine Learning 233
13.2 Adversarial Learning Problem Definition 234
13.3 Game Theory in Adversarial Machine Learning 235
13.3.1 Simultaneous Games 235
13.3.1.1 … 236
13.3.1.2 Nash Equilibrium Games 236
… 237
13.4 Simultaneous Zero-sum Games in Real Applications 238
13.4.1 Adversarial Attack Models 239
13.4.1.1 Free-Range Attack 239
13.4.1.2 Restrained Attack 239
13.4.2 Adversarial SVM Learning 241
13.4.2.1 AD-SVM Against Free-range Attack Model 241
13.4.2.2 AD-SVM Against Restrained Attack Model 242
13.4.3 … 244
13.4.3.1 Attack Simulation 244
13.4.3.2 Experimental Results 244
13.4.3.3 A Few Words on Setting Cf and Cδ 246
… 248
13.5 Nested Bayesian Stackelberg Games 249
13.5.1 Adversarial Learning 249
13.5.2 A Single Leader Single Follower Stackelberg Game 250
13.5.3 Learning Models and Adversary Types 251
13.5.3.1 … 251
13.5.3.2 … 253
13.5.3.3 Setting Payoff Matrices for the Single Leader Multiple-followers Game 253
13.5.4 A Single Leader Multi-followers Stackelberg Game 255
13.5.5 … 256
13.5.5.1 Artificial Datasets 257
13.5.5.2 … 262
… 266
… 266
… 267
… 267

14 Adversarial Machine Learning for 5G Communications Security 270
14.1 … 270
14.2 Adversarial Machine Learning 273
14.3 Adversarial Machine Learning in Wireless Communications 275
14.3.1 Wireless Attacks Built Upon Adversarial Machine Learning 275
14.3.2 Domain-specific Challenges for Adversarial Machine Learning in Wireless Communications 277
14.3.3 Defense Schemes Against Adversarial Machine Learning 278
14.4 Adversarial Machine Learning in 5G Communications 279
14.4.1 Scenario 1 – Adversarial Attack on 5G Spectrum Sharing 279
14.4.1.1 … 279
14.4.1.2 Simulation Setup and Performance Results 279
14.4.2 Scenario 2 – Adversarial Attack on Signal Authentication in Network Slicing 281
14.4.2.1 … 281
14.4.2.2 Simulation Setup and Performance Results 282
14.4.3 Defense Against Adversarial Machine Learning in 5G Communications 283
… 284
… 284

15 Machine Learning in the Hands of a Malicious Adversary: A Near Future If Not Reality 289
15.1 … 289
15.2 AI-driven Advanced Targeted Attacks 290
15.2.1 Advanced Targeted Attacks 290
15.2.2 Motivation for Adapting ML Methods in Malware 291
15.2.3 AI to Flesh Out the Details of What, When, and How to Attack from Internal Reconnaissance 293
15.2.3.1 What to Attack: Confirming a Target System 294
15.2.3.2 When to Attack: Determining the Time to Trigger an Attack Payload 294
15.2.3.3 How to Attack: Devising the Attack Payload 295
… 295
15.3 Inference of When to Attack: The Case of Attacking a Surgical Robot 296
15.3.1 … 296
15.3.2 … 297
15.3.2.1 Attack Preparation 297
15.3.2.2 Attack Strategy: ROS-specific MITM 297
15.3.2.3 Trigger: Inference of Critical Time to Initiate the Malicious Payload 298
15.3.2.4 Attack Payload: Fault Injection 299
… 300
… 301
… 302
15.4 Inference of How to Attack: The Case of Attacking a Building Control System 302
15.4.1 … 303
15.4.1.1 Computing Infrastructure: Blue Waters 303
15.4.1.2 Cyber-physical System: NPCF Building Automation System 304
… 304
… 304
… 307
… 308
… 310
15.5 Protection from Rising Threats 311
… 312
… 313
… 313

16 Trinity: Trust, Resilience and Interpretability of Machine Learning Models 317
16.1 … 317
16.2 Trust and Interpretability 319
16.2.1 Formal Methods and Verification 319
16.2.2 Top-down Analysis by Synthesis 321
16.3 Resilience and Interpretability 325
16.3.1 Manifold-based Defense 326
16.3.2 Attribution-based Confidence Using Shapley Values 328
… 329
… 330

Part IV Generative Models for Cyber Security 335

17 Evading Machine Learning Based Network Intrusion Detection Systems with GANs 337
17.1 … 337
17.2 … 338
17.2.1 Network Intrusion Detection Systems 338
17.2.2 Adversarial Examples 338
17.2.3 Generative Adversarial Networks 339
17.2.4 Crafting Adversarial Examples Using GANs 339
17.3 … 340
17.3.1 Target NIDS for Detecting Attack Traffic 340
17.3.1.1 Discriminator and Generator Architectures 342
17.3.2 … 342
17.3.3 … 343
17.3.4 The Attack Algorithm 343
17.4 … 346
17.4.1 … 346
17.4.2 … 346
17.4.3 Transfer-based Attack 349
… 349
… 349
17.A Network Traffic Flow Features 351

18 Concealment Charm (ConcealGAN): Automatic Generation of Steganographic Text Using Generative Models to Bypass Censorship 357
18.1 … 357
18.2 … 358
18.3 … 359
18.3.1 Previous Works Using Machine Learning Techniques 359
18.3.2 High-Level Working Mechanism of ConcealGAN 360
18.3.3 Double Layer of Encoding 360
18.3.4 Compression of Data 361
18.3.5 Embedding Algorithms 362
… 362
… 362
… 363
18.5 Conclusion and Future Work 363
… 364

Part V Reinforcement Learning for Cyber Security 367

19 Manipulating Reinforcement Learning: Stealthy Attacks on Cost Signals 369
19.1 Introduction to Reinforcement Learning 369
19.1.1 … 370
19.1.2 … 372
19.1.3 … 374
19.2 Security Problems of Reinforcement Learning 374
19.3 Reinforcement Learning with Manipulated Cost Signals 376
19.3.1 TD Learning with Manipulated Cost Signals 376
19.3.2 Q-Learning with Manipulated Cost Signals 379
… 384
… 386
… 386
… 388

20 Resource-Aware Intrusion Response Based on Deep Reinforcement Learning for Software-Defined Internet-of-Battle-Things 389
20.1 … 389
20.1.1 Motivation and Challenges 389
20.1.2 … 389
20.1.3 Research Questions 390
20.1.4 … 390
20.1.5 Structure of This Chapter 391
20.2 … 391
20.2.1 Software-Defined Internet-of-Battle-Things 391
20.2.2 Deep Reinforcement Learning 392
20.2.3 Resource-Aware Defense Systems 392
20.3 … 393
20.3.1 … 393
20.3.2 … 394
20.3.3 … 394
20.3.4 System Failure Condition 396
20.3.5 … 397
20.4 The Proposed DRL-Based Resource-Aware Active Defense Framework 397
20.4.1 Multi-Layered Defense Network Structure 398
20.4.2 DRL-Based Intrusion Response Strategies 398
20.4.2.1 Intrusion Response Strategies 398
20.4.2.2 Selection of an Intrusion Response Policy 399
20.5 Experimental Setting 402
20.5.1 … 402
20.5.2 … 403
20.6 Simulation Results & Analysis 403
20.6.1 Effect of DRL-Based Intrusion Response Strategies on Accumulated Rewards 403
20.6.2 Effect of DRL-Based Intrusion Response Strategies Under Varying Attack Severity (Pa) 404
20.7 Conclusion & Future Work 406
… 407

Part VI Other Machine Learning Approaches to Cyber Security 411

21 Smart Internet Probing: Scanning Using Adaptive Machine Learning 413
21.1 … 413
21.2 … 415
21.2.1 Global Internet Scans 415
21.2.2 … 416
21.2.3 … 416
21.3 … 418
21.3.1 Classification Algorithm 418
21.3.2 Features for Model Training 419
21.3.3 … 419
21.4 … 420
21.4.1 … 420
21.4.2 Sequential Scanning 421
21.4.2.1 Finding an Optimal Scan Order 422
21.4.2.2 Training the Sequence of Classifiers 422
21.5 … 423
21.5.1 … 423
21.5.2 … 423
21.5.3 Sequential Scanning 425
21.6 … 428
21.6.1 Comparison with Other Approaches 428
21.6.2 Coverage on Vulnerable IP Addresses 430
21.6.3 Keeping Models Up-to-Date 431
21.7 … 433
21.7.1 … 433
21.8 Conclusions and Future Work 434
… 435
… 435

22 Semi-automated Parameterization of a Probabilistic Model Using Logistic Regression – A Tutorial 438
22.1 … 438
22.1.1 Context, Scope, and Notation 440
22.1.2 Assumptions on Data Availability 440
22.2 … 442
22.2.1 Exact Transition Models 443
22.2.2 Meaning of Dependencies 443
22.2.3 Dependency Descriptions and Information Exchange 444
22.2.4 Local and Global Models 445
22.3 Parameterization by Example 446
22.3.1 Structure of Examples 447
22.3.2 Modeling Recovery Events 448
22.3.3 Constructing the Model Parameter Estimation Function 450
22.4 Data Gathering and Preparation 450
22.4.1 Public Sources of Data 450
22.4.2 Getting Training Data 451
22.4.3 Explorative Data Analysis 455
22.5 Logistic Regression (LR) – Basics 456
22.5.1 Handling Categorical and Missing Data 457
22.5.1.1 Treatment of Missing Values 458
22.5.1.2 Recommended Treatment of Missing Data 459
22.6 Application of LR for Model Parameterization 462
22.6.1 Step 1: Fitting the Regression Model 463
22.6.1.1 Model Diagnostics and Plausibility Checks 469
22.6.1.2 Choosing/Constructing Alternative Models 472
22.6.2 Step 2: Apply the Regression for Batch Parameterization 474
22.6.2.1 Parameterizing the Automaton with Incomplete Data 476
22.6.2.2 Compiling the Results 479
… 480
… 481
22.A … 481
22.A.1 On Fuzzy Logic Methods and Copulas 481
22.A.2 Using Fuzzy Logic to Estimate Parameters 482
22.A.3 Using Neural Networks to Estimate Parameters 482
… 482

23 Resilient Distributed Adaptive Cyber-Defense Using Blockchain 485
23.1 … 485
23.2 Temporal Online Reinforcement Learning 488
23.3 Spatial Online Reinforcement Learning 490
23.4 Experimental Results 492
23.5 Survivable Adaptive Distributed Systems 495
23.6 Summary and Future Work 496
… 497
… 497

24 Summary and Future Work 499
24.1 … 500
24.1.1 Game Theory for Cyber Deception 500
24.1.2 Game Theory for Cyber Security 500
24.1.3 Adversarial Machine Learning for Cyber Security 502
24.1.4 Generative Models for Cyber Security 502
24.1.5 Reinforcement Learning for Cyber Security 502
24.1.6 Other Machine Learning Approaches to Cyber Security 503
24.2 … 503
24.2.1 Game Theory and Cyber Security 503
24.2.2 Machine Learning and Cyber Security 504
… 505

Index 507