Preface to the Fifth Edition, xix
Preface to the Fourth Edition, xxiii
Preface to the Third Edition, xxv

1 Introduction, 1 (18)
1.1 Five Important Practical Problems, 2 (4)
1.1.1 Forecasting Time Series, 2 (1)
1.1.2 Estimation of Transfer Functions, 3 (1)
1.1.3 Analysis of Effects of Unusual Intervention Events to a System, 4 (1)
1.1.4 Analysis of Multivariate Time Series, 4 (1)
1.1.5 Discrete Control Systems, 5 (1)
1.2 Stochastic and Deterministic Dynamic Mathematical Models, 6 (8)
1.2.1 Stationary and Nonstationary Stochastic Models for Forecasting and Control, 7 (4)
1.2.2 Transfer Function Models, 11 (2)
1.2.3 Models for Discrete Control Systems, 13 (1)
1.3 Basic Ideas in Model Building, 14 (3)
1.3.1 Parsimony, 14 (1)
1.3.2 Iterative Stages in the Selection of a Model, 15 (2)
Appendix A1.1 Use of the R Software, 17 (1)
Exercises, 18 (1)

Part One Stochastic Models and Their Forecasting, 19 (158)

2 Autocorrelation Function and Spectrum of Stationary Processes, 21 (26)
2.1 Autocorrelation Properties of Stationary Models, 21 (13)
2.1.1 Time Series and Stochastic Processes, 21 (3)
2.1.2 Stationary Stochastic Processes, 24 (2)
2.1.3 Positive Definiteness and the Autocovariance Matrix, 26 (3)
2.1.4 Autocovariance and Autocorrelation Functions, 29 (1)
2.1.5 Estimation of Autocovariance and Autocorrelation Functions, 30 (1)
2.1.6 Standard Errors of Autocorrelation Estimates, 31 (3)
2.2 Spectral Properties of Stationary Models, 34 (9)
2.2.1 Periodogram of a Time Series, 34 (1)
2.2.2 Analysis of Variance, 35 (1)
2.2.3 Spectrum and Spectral Density Function, 36 (4)
2.2.4 Simple Examples of Autocorrelation and Spectral Density Functions, 40 (3)
2.2.5 Advantages and Disadvantages of the Autocorrelation and Spectral Density Functions, 43 (1)
Appendix A2.1 Link Between the Sample Spectrum and Autocovariance Function Estimate, 43 (1)
Exercises, 44 (3)

3 Linear Stationary Models, 47 (41)
3.1 General Linear Process, 47 (7)
3.1.1 Two Equivalent Forms for the Linear Process, 47 (3)
3.1.2 Autocovariance Generating Function of a Linear Process, 50 (1)
3.1.3 Stationarity and Invertibility Conditions for a Linear Process, 51 (1)
3.1.4 Autoregressive and Moving Average Processes, 52 (2)
3.2 Autoregressive Processes, 54 (14)
3.2.1 Stationarity Conditions for Autoregressive Processes, 54 (2)
3.2.2 Autocorrelation Function and Spectrum of Autoregressive Processes, 56 (2)
3.2.3 The First-Order Autoregressive Process, 58 (1)
3.2.4 Second-Order Autoregressive Process, 59 (5)
3.2.5 Partial Autocorrelation Function, 64 (2)
3.2.6 Estimation of the Partial Autocorrelation Function, 66 (1)
3.2.7 Standard Errors of Partial Autocorrelation Estimates, 66 (1)
67 (1)
3.3 Moving Average Processes, 68 (7)
3.3.1 Invertibility Conditions for Moving Average Processes, 68 (1)
3.3.2 Autocorrelation Function and Spectrum of Moving Average Processes, 69 (1)
3.3.3 First-Order Moving Average Process, 70 (1)
3.3.4 Second-Order Moving Average Process, 71 (4)
3.3.5 Duality Between Autoregressive and Moving Average Processes, 75 (1)
3.4 Mixed Autoregressive–Moving Average Processes, 75 (7)
3.4.1 Stationarity and Invertibility Properties, 75 (2)
3.4.2 Autocorrelation Function and Spectrum of Mixed Processes, 77 (1)
3.4.3 First-Order Autoregressive–First-Order Moving Average Process, 78 (3)
81 (1)
Appendix A3.1 Autocovariances, Autocovariance Generating Function, and Stationarity Conditions for a General Linear Process, 82 (2)
Appendix A3.2 Recursive Method for Calculating Estimates of Autoregressive Parameters, 84 (2)
Exercises, 86 (2)

4 Linear Nonstationary Models, 88 (41)
4.1 Autoregressive Integrated Moving Average Processes, 88 (9)
4.1.1 Nonstationary First-Order Autoregressive Process, 88 (2)
4.1.2 General Model for a Nonstationary Process Exhibiting Homogeneity, 90 (4)
4.1.3 General Form of the ARIMA Model, 94 (3)
4.2 Three Explicit Forms for the ARIMA Model, 97 (9)
4.2.1 Difference Equation Form of the Model, 97 (1)
4.2.2 Random Shock Form of the Model, 98 (5)
4.2.3 Inverted Form of the Model, 103 (3)
4.3 Integrated Moving Average Processes, 106 (10)
4.3.1 Integrated Moving Average Process of Order (0, 1, 1), 107 (3)
4.3.2 Integrated Moving Average Process of Order (0, 2, 2), 110 (4)
4.3.3 General Integrated Moving Average Process of Order (0, d, q), 114 (2)
Appendix A4.1 Linear Difference Equations, 116 (5)
Appendix A4.2 IMA(0, 1, 1) Process with Deterministic Drift, 121 (1)
Appendix A4.3 ARIMA Processes with Added Noise, 122 (4)
A4.3.1 Sum of Two Independent Moving Average Processes, 122 (1)
A4.3.2 Effect of Added Noise on the General Model, 123 (1)
A4.3.3 Example for an IMA(0, 1, 1) Process with Added White Noise, 124 (1)
A4.3.4 Relation Between the IMA(0, 1, 1) Process and a Random Walk, 125 (1)
A4.3.5 Autocovariance Function of the General Model with Added Correlated Noise, 125 (1)
Exercises, 126 (3)

5 Forecasting, 129 (48)
5.1 Minimum Mean Square Error Forecasts and Their Properties, 129 (6)
5.1.1 Derivation of the Minimum Mean Square Error Forecasts, 131 (1)
5.1.2 Three Basic Forms for the Forecast, 132 (3)
5.2 Calculating Forecasts and Probability Limits, 135 (4)
5.2.1 Calculation of ψ Weights, 135 (1)
5.2.2 Use of the ψ Weights in Updating the Forecasts, 136 (1)
5.2.3 Calculation of the Probability Limits at Different Lead Times, 137 (1)
5.2.4 Calculation of Forecasts Using R, 138 (1)
5.3 Forecast Function and Forecast Weights, 139 (5)
5.3.1 Eventual Forecast Function Determined by the Autoregressive Operator, 140 (1)
5.3.2 Role of the Moving Average Operator in Fixing the Initial Values, 140 (2)
5.3.3 Lead 1 Forecast Weights, 142 (2)
5.4 Examples of Forecast Functions and Their Updating, 144 (11)
5.4.1 Forecasting an IMA(0, 1, 1) Process, 144 (3)
5.4.2 Forecasting an IMA(0, 2, 2) Process, 147 (2)
5.4.3 Forecasting a General IMA(0, d, q) Process, 149 (1)
5.4.4 Forecasting Autoregressive Processes, 150 (3)
5.4.5 Forecasting a (1, 0, 1) Process, 153 (1)
5.4.6 Forecasting a (1, 1, 1) Process, 154 (1)
5.5 Use of State-Space Model Formulation for Exact Forecasting, 155 (7)
5.5.1 State-Space Model Representation for the ARIMA Process, 155 (2)
5.5.2 Kalman Filtering Relations for Use in Prediction, 157 (3)
5.5.3 Smoothing Relations in the State Variable Model, 160 (2)
162 (2)
Appendix A5.1 Correlation Between Forecast Errors, 164 (2)
A5.1.1 Autocorrelation Function of Forecast Errors at Different Origins, 164 (1)
A5.1.2 Correlation Between Forecast Errors at the Same Origin with Different Lead Times, 165 (1)
Appendix A5.2 Forecast Weights for Any Lead Time, 166 (2)
Appendix A5.3 Forecasting in Terms of the General Integrated Form, 168 (6)
A5.3.1 General Method of Obtaining the Integrated Form, 168 (2)
A5.3.2 Updating the General Integrated Form, 170 (1)
A5.3.3 Comparison with the Discounted Least-Squares Method, 171 (3)
Exercises, 174 (3)

Part Two Stochastic Model Building, 177 (218)

6 Model Identification, 179 (30)
6.1 Objectives of Identification, 179 (1)
6.1.1 Stages in the Identification Procedure, 180 (1)
6.2 Identification Techniques, 180 (14)
6.2.1 Use of the Autocorrelation and Partial Autocorrelation Functions in Identification, 180 (3)
6.2.2 Standard Errors for Estimated Autocorrelations and Partial Autocorrelations, 183 (2)
6.2.3 Identification of Models for Some Actual Time Series, 185 (5)
6.2.4 Some Additional Model Identification Tools, 190 (4)
6.3 Initial Estimates for the Parameters, 194 (8)
6.3.1 Uniqueness of Estimates Obtained from the Autocovariance Function, 194 (1)
6.3.2 Initial Estimates for Moving Average Processes, 194 (2)
6.3.3 Initial Estimates for Autoregressive Processes, 196 (1)
6.3.4 Initial Estimates for Mixed Autoregressive–Moving Average Processes, 197 (1)
6.3.5 Initial Estimate of Error Variance, 198 (1)
6.3.6 Approximate Standard Error for w̄, 199 (1)
6.3.7 Choice Between Stationary and Nonstationary Models in Doubtful Cases, 200 (2)
6.4 Model Multiplicity, 202 (4)
6.4.1 Multiplicity of Autoregressive–Moving Average Models, 202 (2)
6.4.2 Multiple Moment Solutions for Moving Average Parameters, 204 (1)
6.4.3 Use of the Backward Process to Determine Starting Values, 205 (1)
Appendix A6.1 Expected Behavior of the Estimated Autocorrelation Function for a Nonstationary Process, 206 (1)
Exercises, 207 (2)

7 Parameter Estimation, 209 (75)
7.1 Study of the Likelihood and Sum-of-Squares Functions, 209 (17)
7.1.1 Likelihood Function, 209 (1)
7.1.2 Conditional Likelihood for an ARIMA Process, 210 (1)
7.1.3 Choice of Starting Values for Conditional Calculation, 211 (2)
7.1.4 Unconditional Likelihood, Sum-of-Squares Function, and Least-Squares Estimates, 213 (3)
7.1.5 General Procedure for Calculating the Unconditional Sum of Squares, 216 (2)
7.1.6 Graphical Study of the Sum-of-Squares Function, 218 (2)
7.1.7 Examination of the Likelihood Function and Confidence Regions, 220 (6)
7.2 Nonlinear Estimation, 226 (10)
7.2.1 General Method of Approach, 226 (1)
7.2.2 Numerical Estimates of the Derivatives, 227 (1)
7.2.3 Direct Evaluation of the Derivatives, 228 (1)
7.2.4 General Least-Squares Algorithm for the Conditional Model, 229 (2)
7.2.5 ARIMA Models Fitted to Series A–F, 231 (2)
7.2.6 Large-Sample Information Matrices and Covariance Estimates, 233 (3)
7.3 Some Estimation Results for Specific Models, 236 (6)
7.3.1 Autoregressive Processes, 236 (2)
7.3.2 Moving Average Processes, 238 (1)
7.3.3 Mixed Processes, 238 (1)
7.3.4 Separation of Linear and Nonlinear Components in Estimation, 239 (1)
7.3.5 Parameter Redundancy, 240 (2)
7.4 Likelihood Function Based on the State-Space Model, 242 (3)
7.5 Estimation Using Bayes' Theorem, 245 (6)
7.5.1 Bayes' Theorem, 245 (1)
7.5.2 Bayesian Estimation of Parameters, 246 (1)
7.5.3 Autoregressive Processes, 247 (2)
7.5.4 Moving Average Processes, 249 (1)
7.5.5 Mixed Processes, 250 (1)
Appendix A7.1 Review of Normal Distribution Theory, 251 (5)
A7.1.1 Partitioning of a Positive-Definite Quadratic Form, 251 (1)
A7.1.2 Two Useful Integrals, 252 (1)
A7.1.3 Normal Distribution, 253 (2)
A7.1.4 Student's t Distribution, 255 (1)
Appendix A7.2 Review of Linear Least-Squares Theory, 256 (3)
A7.2.1 Normal Equations and Least Squares, 256 (1)
A7.2.2 Estimation of Error Variance, 257 (1)
A7.2.3 Covariance Matrix of Least-Squares Estimates, 257 (1)
A7.2.4 Confidence Regions, 257 (1)
A7.2.5 Correlated Errors, 258 (1)
Appendix A7.3 Exact Likelihood Function for Moving Average and Mixed Processes, 259 (7)
Appendix A7.4 Exact Likelihood Function for an Autoregressive Process, 266 (8)
Appendix A7.5 Asymptotic Distribution of Estimators for Autoregressive Models, 274 (3)
Appendix A7.6 Examples of the Effect of Parameter Estimation Errors on Variances of Forecast Errors and Probability Limits for Forecasts, 277 (3)
Appendix A7.7 Special Note on Estimation of Moving Average Parameters, 280 (1)
Exercises, 280 (4)

8 Model Diagnostic Checking, 284 (21)
8.1 Checking the Stochastic Model, 284 (3)
8.1.1 General Philosophy, 284 (1)
8.1.2 Overfitting, 285 (2)
8.2 Diagnostic Checks Applied to Residuals, 287 (14)
8.2.1 Autocorrelation Check, 287 (2)
8.2.2 Portmanteau Lack-of-Fit Test, 289 (5)
8.2.3 Model Inadequacy Arising from Changes in Parameter Values, 294 (1)
8.2.4 Score Tests for Model Checking, 295 (2)
8.2.5 Cumulative Periodogram Check, 297 (4)
8.3 Use of Residuals to Modify the Model, 301 (2)
8.3.1 Nature of the Correlations in the Residuals When an Incorrect Model Is Used, 301 (1)
8.3.2 Use of Residuals to Modify the Model, 302 (1)
Exercises, 303 (2)

9 Analysis of Seasonal Time Series, 305 (47)
9.1 Parsimonious Models for Seasonal Time Series, 305 (5)
9.1.1 Fitting Versus Forecasting, 306 (1)
9.1.2 Seasonal Models Involving Adaptive Sines and Cosines, 307 (1)
9.1.3 General Multiplicative Seasonal Model, 308 (2)
9.2 Representation of the Airline Data by a Multiplicative (0, 1, 1) × (0, 1, 1)₁₂ Model, 310 (15)
9.2.1 Multiplicative (0, 1, 1) × (0, 1, 1)₁₂ Model, 310 (1)
9.2.2 Forecasting, 311 (7)
9.2.3 Model Identification, 318 (2)
9.2.4 Parameter Estimation, 320 (4)
9.2.5 Diagnostic Checking, 324 (1)
9.3 Some Aspects of More General Seasonal ARIMA Models, 325 (6)
9.3.1 Multiplicative and Nonmultiplicative Models, 325 (2)
9.3.2 Model Identification, 327 (1)
9.3.3 Parameter Estimation, 328 (1)
9.3.4 Eventual Forecast Functions for Various Seasonal Models, 329 (2)
9.3.5 Choice of Transformation, 331 (1)
9.4 Structural Component Models and Deterministic Seasonal Components, 331 (8)
9.4.1 Structural Component Time Series Models, 332 (3)
9.4.2 Deterministic Seasonal and Trend Components and Common Factors, 335 (1)
9.4.3 Estimation of Unobserved Components in Structural Models, 336 (3)
9.5 Regression Models with Time Series Error Terms, 339 (6)
9.5.1 Model Building, Estimation, and Forecasting Procedures for Regression Models, 340 (4)
9.5.2 Restricted Maximum Likelihood Estimation for Regression Models, 344 (1)
Appendix A9.1 Autocovariances for Some Seasonal Models, 345 (4)
Exercises, 349 (3)

10 Additional Topics and Extensions, 352 (43)
10.1 Tests for Unit Roots in ARIMA Models, 353 (8)
10.1.1 Tests for Unit Roots in AR Models, 353 (5)
10.1.2 Extensions of Unit Root Testing to Mixed ARIMA Models, 358 (3)
10.2 Conditional Heteroscedastic Models, 361 (16)
362 (4)
366 (1)
10.2.3 Model Building and Parameter Estimation, 367 (3)
10.2.4 An Illustrative Example: Weekly S&P 500 Log Returns, 370 (2)
10.2.5 Extensions of the ARCH and GARCH Models, 372 (5)
10.2.6 Stochastic Volatility Models, 377 (1)
10.3 Nonlinear Time Series Models, 377 (8)
10.3.1 Classes of Nonlinear Models, 378 (3)
10.3.2 Detection of Nonlinearity, 381 (1)
10.3.3 An Empirical Example, 382 (3)
10.4 Long Memory Time Series Processes, 385 (7)
10.4.1 Fractionally Integrated Processes, 385 (4)
10.4.2 Estimation of Parameters, 389 (3)
Exercises, 392 (3)

Part Three Transfer Function and Multivariate Model Building, 395 (164)

11 Transfer Function Models, 397 (31)
11.1 Linear Transfer Function Models, 397 (7)
11.1.1 Discrete Transfer Function, 398 (2)
11.1.2 Continuous Dynamic Models Represented by Differential Equations, 400 (4)
11.2 Discrete Dynamic Models Represented by Difference Equations, 404 (10)
11.2.1 General Form of the Difference Equation, 404 (2)
11.2.2 Nature of the Transfer Function, 406 (1)
11.2.3 First- and Second-Order Discrete Transfer Function Models, 407 (5)
11.2.4 Recursive Computation of Output for Any Input, 412 (1)
11.2.5 Transfer Function Models with Added Noise, 413 (1)
11.3 Relation Between Discrete and Continuous Models, 414 (6)
11.3.1 Response to a Pulsed Input, 415 (2)
11.3.2 Relationships for First- and Second-Order Coincident Systems, 417 (2)
11.3.3 Approximating General Continuous Models by Discrete Models, 419 (1)
Appendix A11.1 Continuous Models with Pulsed Inputs, 420 (4)
Appendix A11.2 Nonlinear Transfer Functions and Linearization, 424 (2)
Exercises, 426 (2)

12 Identification, Fitting, and Checking of Transfer Function Models, 428 (53)
12.1 Cross-Correlation Function, 429 (6)
12.1.1 Properties of the Cross-Covariance and Cross-Correlation Functions, 429 (2)
12.1.2 Estimation of the Cross-Covariance and Cross-Correlation Functions, 431 (2)
12.1.3 Approximate Standard Errors of Cross-Correlation Estimates, 433 (2)
12.2 Identification of Transfer Function Models, 435 (11)
12.2.1 Identification of Transfer Function Models by Prewhitening the Input, 437 (1)
12.2.2 Example of the Identification of a Transfer Function Model, 438 (4)
12.2.3 Identification of the Noise Model, 442 (2)
12.2.4 Some General Considerations in Identifying Transfer Function Models, 444 (2)
12.3 Fitting and Checking Transfer Function Models, 446 (7)
12.3.1 Conditional Sum-of-Squares Function, 446 (1)
12.3.2 Nonlinear Estimation, 447 (2)
12.3.3 Use of Residuals for Diagnostic Checking, 449 (1)
12.3.4 Specific Checks Applied to the Residuals, 450 (3)
12.4 Some Examples of Fitting and Checking Transfer Function Models, 453 (8)
12.4.1 Fitting and Checking of the Gas Furnace Model, 453 (5)
12.4.2 Simulated Example with Two Inputs, 458 (3)
12.5 Forecasting with Transfer Function Models Using Leading Indicators, 461 (8)
12.5.1 Minimum Mean Square Error Forecast, 461 (4)
12.5.2 Forecast of CO₂ Output from Gas Furnace, 465 (3)
12.5.3 Forecast of Nonstationary Sales Data Using a Leading Indicator, 468 (1)
12.6 Some Aspects of the Design of Experiments to Estimate Transfer Functions, 469 (2)
Appendix A12.1 Use of Cross-Spectral Analysis for Transfer Function Model Identification, 471 (2)
A12.1.1 Identification of Single-Input Transfer Function Models, 471 (1)
A12.1.2 Identification of Multiple-Input Transfer Function Models, 472 (1)
Appendix A12.2 Choice of Input to Provide Optimal Parameter Estimates, 473 (4)
A12.2.1 Design of Optimal Inputs for a Simple System, 473
A12.2.2 Numerical Example, 476 (1)
Exercises, 477 (4)

13 Intervention Analysis, Outlier Detection, and Missing Values, 481 (24)
13.1 Intervention Analysis Methods, 481 (7)
13.1.1 Models for Intervention Analysis, 481 (3)
13.1.2 Example of Intervention Analysis, 484 (1)
13.1.3 Nature of the MLE for a Simple Level Change Parameter Model, 485 (3)
13.2 Outlier Analysis for Time Series, 488 (7)
13.2.1 Models for Additive and Innovational Outliers, 488 (1)
13.2.2 Estimation of Outlier Effect for Known Timing of the Outlier, 489 (2)
13.2.3 Iterative Procedure for Outlier Detection, 491 (1)
13.2.4 Examples of Analysis of Outliers, 492 (3)
13.3 Estimation for ARMA Models with Missing Values, 495 (7)
13.3.1 State-Space Model and Kalman Filter with Missing Values, 496 (2)
13.3.2 Estimation of Missing Values of an ARMA Process, 498 (4)
Exercises, 502 (3)

14 Multivariate Time Series Analysis, 505 (54)
14.1 Stationary Multivariate Time Series, 506 (3)
14.1.1 Cross-Covariance and Cross-Correlation Matrices, 506 (1)
14.1.2 Covariance Stationarity, 507 (1)
14.1.3 Vector White Noise Process, 507 (1)
14.1.4 Moving Average Representation of a Stationary Vector Process, 508 (1)
14.2 Vector Autoregressive Models, 509 (15)
509 (1)
14.2.2 Moment Equations and Yule–Walker Estimates, 510 (1)
14.2.3 Special Case: VAR(1) Model, 511 (2)
14.2.4 Numerical Example, 513 (2)
14.2.5 Initial Model Building and Least-Squares Estimation for VAR Models, 515 (3)
14.2.6 Parameter Estimation and Model Checking, 518 (1)
14.2.7 An Empirical Example, 519 (5)
14.3 Vector Moving Average Models, 524 (3)
14.3.1 Vector MA(q) Model, 524 (1)
14.3.2 Special Case: Vector MA(1) Model, 525 (1)
14.3.3 Numerical Example, 525 (1)
14.3.4 Model Building for Vector MA Models, 526 (1)
14.4 Vector Autoregressive–Moving Average Models, 527 (7)
14.4.1 Stationarity and Invertibility Conditions, 527 (1)
14.4.2 Covariance Matrix Properties of VARMA Processes, 528 (1)
14.4.3 Nonuniqueness and Parameter Identifiability for VARMA Models, 528 (1)
14.4.4 Model Specification for VARMA Processes, 529 (3)
14.4.5 Estimation and Model Checking for VARMA Models, 532 (1)
14.4.6 Relation of VARMA Models to Transfer Function and ARMAX Models, 533 (1)
14.5 Forecasting for Vector Autoregressive–Moving Average Processes, 534 (2)
14.5.1 Calculation of Forecasts from ARMA Difference Equation, 534 (2)
14.5.2 Forecasts from Infinite VMA Form and Properties of Forecast Errors, 536 (1)
14.6 State-Space Form of the VARMA Model, 536 (3)
14.7 Further Discussion of VARMA Model Specification, 539 (7)
14.7.1 Kronecker Structure for VARMA Models, 539 (4)
14.7.2 An Empirical Example, 543 (2)
14.7.3 Partial Canonical Correlation Analysis for Reduced-Rank Structure, 545 (1)
14.8 Nonstationarity and Cointegration, 546 (6)
14.8.1 Vector ARIMA Models, 546 (1)
14.8.2 Cointegration in Nonstationary Vector Processes, 547 (2)
14.8.3 Estimation and Inferences for Cointegrated VAR Models, 549 (3)
Appendix A14.1 Spectral Characteristics and Linear Filtering Relations for Stationary Multivariate Processes, 552 (2)
A14.1.1 Spectral Characteristics for Stationary Multivariate Processes, 552 (1)
A14.1.2 Linear Filtering Relations for Stationary Multivariate Processes, 553 (1)
Exercises, 554 (5)

Part Four Design of Discrete Control Schemes, 559 (58)

15 Aspects of Process Control, 561 (56)
15.1 Process Monitoring and Process Adjustment, 562 (4)
15.1.1 Process Monitoring, 562 (2)
15.1.2 Process Adjustment, 564 (2)
15.2 Process Adjustment Using Feedback Control, 566 (14)
15.2.1 Feedback Adjustment Chart, 567 (2)
15.2.2 Modeling the Feedback Loop, 569 (1)
15.2.3 Simple Models for Disturbances and Dynamics, 570 (3)
15.2.4 General Minimum Mean Square Error Feedback Control Schemes, 573 (2)
15.2.5 Manual Adjustment for Discrete Proportional–Integral Schemes, 575 (3)
15.2.6 Complementary Roles of Monitoring and Adjustment, 578 (2)
15.3 Excessive Adjustment Sometimes Required by MMSE Control, 580 (2)
15.3.1 Constrained Control, 581 (1)
15.4 Minimum Cost Control with Fixed Costs of Adjustment and Monitoring, 582 (6)
15.4.1 Bounded Adjustment Scheme for Fixed Adjustment Cost, 583 (1)
15.4.2 Indirect Approach for Obtaining a Bounded Adjustment Scheme, 584 (1)
15.4.3 Inclusion of the Cost of Monitoring, 585 (3)
15.5 Feedforward Control, 588 (11)
15.5.1 Feedforward Control to Minimize Mean Square Error at the Output, 588 (3)
15.5.2 An Example: Control of the Specific Gravity of an Intermediate Product, 591 (2)
15.5.3 Feedforward Control with Multiple Inputs, 593 (1)
15.5.4 Feedforward–Feedback Control, 594 (2)
15.5.5 Advantages and Disadvantages of Feedforward and Feedback Control, 596 (1)
15.5.6 Remarks on Fitting Transfer Function–Noise Models Using Operating Data, 597 (2)
15.6 Monitoring Values of Parameters of Forecasting and Feedback Adjustment Schemes, 599 (1)
Appendix A15.1 Feedback Control Schemes Where the Adjustment Variance Is Restricted, 600 (9)
A15.1.1 Derivation of Optimal Adjustment, 601 (2)
A15.1.2 Case Where S Is Negligible, 603 (6)
Appendix A15.2 Choice of the Sampling Interval, 609 (4)
A15.2.1 Illustration of the Effect of Reducing Sampling Frequency, 610 (1)
A15.2.2 Sampling an IMA(0, 1, 1) Process, 610 (3)
Exercises, 613 (4)

Part Five Charts and Tables, 617 (25)
Collection of Tables and Charts, 619 (6)
Collection of Time Series Used for Examples in the Text and in Exercises, 625 (17)

References, 642 (17)
Index, 659