Modelling Nonlinear Economic Time Series [Hardback]

Timo Teräsvirta (Professor of Economics, CREATES, Aarhus University, Denmark), Dag Tjøstheim (Professor, Department of Mathematics, University of Bergen, Norway), Clive W. J. Granger
  • Format: Hardback, 586 pages, height x width x thickness: 241x164x35 mm, weight: 982 g, numerous figures and tables
  • Series: Advanced Texts in Econometrics
  • Publication date: 16-Dec-2010
  • Publisher: Oxford University Press
  • ISBN-10: 0199587140
  • ISBN-13: 9780199587148
This book provides an extensive, up-to-date overview of nonlinear time series models and their application to modelling economic relationships. It considers nonlinear models in both stationary and nonstationary frameworks, and discusses both parametric and nonparametric models. The book gives examples of nonlinear models in economic theory and presents the most common nonlinear time series models. Importantly, it shows the reader how to apply these models in practice. To this end, the three stages of model building, specification, estimation, and evaluation, are discussed in detail and illustrated by several examples involving both economic and non-economic data. Since estimation of nonlinear time series models is carried out using numerical algorithms, the book devotes one chapter to estimating parametric nonlinear models and another to estimating nonparametric ones.
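As a concrete, purely illustrative companion to this point (not material from the book), the sketch below shows how the parameters of a simple logistic smooth transition autoregressive model could be estimated by numerical nonlinear least squares. The model specification, the "true" parameter values, and the use of numpy and scipy.optimize are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative two-regime logistic STAR(1) model:
#   y_t = phi1*y_{t-1} + phi2*y_{t-1}*G(y_{t-1}; gamma, c) + eps_t,
# where G(s; gamma, c) = 1 / (1 + exp(-gamma*(s - c))) is the logistic
# transition function.

def logistic(s, gamma, c):
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

rng = np.random.default_rng(0)
phi1, phi2, gamma, c = 0.5, -0.9, 5.0, 0.0   # assumed values used to simulate data
T = 500
y = np.zeros(T)
for t in range(1, T):
    G = logistic(y[t - 1], gamma, c)
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 1] * G + rng.normal(scale=0.5)

def sse(theta):
    # Sum of squared residuals as a function of the parameter vector.
    p1, p2, g, cc = theta
    G = logistic(y[:-1], g, cc)
    resid = y[1:] - (p1 * y[:-1] + p2 * y[:-1] * G)
    return np.sum(resid ** 2)

# Numerical minimization: nonlinear models generally have no closed-form
# least-squares solution, hence the need for an iterative algorithm.
result = minimize(sse, x0=[0.1, -0.1, 1.0, 0.0], method="Nelder-Mead")
print(result.x)
```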

Forecasting is a major reason for building time series models, linear or nonlinear. The book discusses forecasting with nonlinear models, both parametric and nonparametric, and considers the numerical techniques needed to compute multi-period forecasts from them. The main focus of the book is on models of the conditional mean, but models of the conditional variance, mainly those of autoregressive conditional heteroskedasticity, receive attention as well. A separate chapter is devoted to state space models. As a whole, the book is an indispensable tool for researchers interested in nonlinear time series and is also suitable for teaching courses in econometrics and time series analysis.
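The need for numerical techniques in multi-period forecasting can be made concrete with a minimal sketch, again based on an assumed toy model rather than an example from the book: for a nonlinear autoregression, the h-step conditional mean has no simple closed form for h > 1, so it is commonly approximated by averaging over simulated future paths.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_forecast(y_last, horizon=5, n_paths=10_000, sigma=1.0):
    """Monte Carlo multi-step forecasts from an illustrative threshold AR(1):
    coefficient 0.7 when y <= 0 and -0.3 when y > 0 (assumed values)."""
    paths = np.full(n_paths, y_last, dtype=float)
    forecasts = []
    for _ in range(horizon):
        eps = rng.normal(scale=sigma, size=n_paths)
        paths = np.where(paths <= 0.0, 0.7 * paths, -0.3 * paths) + eps
        # The h-step point forecast is the average over the simulated paths.
        forecasts.append(paths.mean())
    return forecasts

print(mc_forecast(y_last=1.5))
```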
List of Figures xx
List of Tables xxiii
Acronyms and abbreviations xxvi
1 Concepts, models, and definitions 1(15)
1.1 Defining nonlinearity 1(1)
1.2 Where does nonlinearity come from? 2(1)
1.3 Stationarity and nonstationarity 3(3)
1.4 Invertibility 6(1)
1.5 Trends 7(3)
1.6 Seasonality 10(1)
1.7 Conditional distributions 10(1)
1.8 Wold's representation and Volterra expansion 11(1)
1.9 Additive models 12(1)
1.10 Spectral analysis 13(1)
1.11 Chaos 14(2)
2 Nonlinear models in economic theory 16(12)
2.1 Disequilibrium models 16(2)
2.2 Labour market models 18(4)
2.2.1 Theory 18(2)
2.2.2 Practice 20(2)
2.3 Exchange rates in a target zone 22(3)
2.3.1 Theory 22(2)
2.3.2 Practice 24(1)
2.4 Production theory 25(3)
3 Parametric nonlinear models 28(24)
3.1 General considerations 28(4)
3.2 Switching regression models 32(3)
3.2.1 Standard switching regression model 32(2)
3.2.2 Vector threshold autoregressive model 34(1)
3.3 Markov-switching regression models 35(2)
3.4 Smooth transition regression models 37(4)
3.4.1 Standard smooth transition regression model 37(3)
3.4.2 Additive, multiple, and time-varying STR models 40(1)
3.4.3 Vector smooth transition autoregressive model 41(1)
3.5 Polynomial models 41(2)
3.6 Artificial neural network models 43(2)
3.7 Min-max models 45(1)
3.8 Nonlinear moving average models 46(1)
3.9 Bilinear models 47(1)
3.10 Time-varying parameters and state space models 48(2)
3.11 Random coefficient and volatility models 50(2)
4 The nonparametric approach 52(13)
4.1 Introduction 52(1)
4.2 Autocovariance and spectrum 53(2)
4.3 Density, conditional mean, and conditional variance 55(2)
4.3.1 Non-Gaussian marginals 55(1)
4.3.2 Conditional quantities 56(1)
4.4 Dependence measures for nonlinear processes 57(8)
4.4.1 Local measures of dependence 58(2)
4.4.2 Global measures of dependence 60(1)
4.4.3 Measures based on density and distribution functions 61(1)
4.4.4 The copula 62(3)
5 Testing linearity against parametric alternatives 65(27)
5.1 Introduction 65(1)
5.2 Consistent misspecification tests 66(2)
5.3 Lagrange multiplier or score test 68(4)
5.3.1 Standard case 68(2)
5.3.2 Test in stages and a heteroskedasticity-robust version 70(1)
5.3.3 Robustifying against conditional heteroskedasticity 71(1)
5.4 Locally equivalent alternatives 72(1)
5.5 Nonlinear model only identified under the alternative 73(10)
5.5.1 Identification problem 73(1)
5.5.2 General solution 74(3)
5.5.3 Lagrange multiplier-type tests 77(3)
5.5.4 Monte Carlo tests 80(2)
5.5.5 Giving values to the nuisance parameters 82(1)
5.6 Testing linearity against unspecified alternatives 83(2)
5.6.1 Regression Specification Error Test 83(1)
5.6.2 Tests based on expansions 84(1)
5.7 Comparing parametric linearity tests using asymptotic relative efficiency 85(5)
5.7.1 Definition 85(3)
5.7.2 An example 88(2)
5.8 Which test to use? 90(2)
6 Testing parameter constancy 92(21)
6.1 General considerations 92(1)
6.2 Generalizing the Chow test 93(4)
6.2.1 Testing against a single break 93(2)
6.2.2 Testing against multiple breaks 95(2)
6.3 Lagrange multiplier type tests 97(8)
6.3.1 Testing a stationary single-equation model 97(3)
6.3.2 Testing a stationary vector autoregressive model 100(2)
6.3.3 Testing a nonstationary vector autoregressive model 102(3)
6.4 Tests based on recursive estimation of parameters 105(8)
6.4.1 Cumulative sum tests 105(2)
6.4.2 Moving sum tests 107(1)
6.4.3 Fluctuation tests 108(1)
6.4.4 Tests against stochastic parameters 109(2)
6.4.5 Testing the constancy of cointegrating relationships 111(2)
7 Nonparametric specification tests 113(49)
7.1 Introduction 113(1)
7.2 Nonparametric linearity tests 114(9)
7.2.1 Nonparametric tests: the spectral domain 115(1)
7.2.2 Testing linearity in the conditional mean and conditional variance 116(3)
7.2.3 Estimation 119(1)
7.2.4 Asymptotic theory 120(1)
7.2.5 Finite-sample properties and use of the asymptotics 121(1)
7.2.6 A bootstrap approach to testing 122(1)
7.3 Testing for specific functional forms 123(6)
7.3.1 Tests based on residuals 124(3)
7.3.2 Conditional mean and conditional variance testing 127(2)
7.3.3 Continuous time 129(1)
7.4 Selecting lags 129(4)
7.5 Testing for additivity and interaction 133(5)
7.5.1 Testing in additive models 133(3)
7.5.2 A simulated example 136(2)
7.6 Tests for partial linearity and semiparametric modelling 138(2)
7.7 Tests of independence 140(22)
7.7.1 Traditional tests 140(1)
7.7.2 Rank correlation 141(2)
7.7.3 Frequency based tests 143(1)
7.7.4 BDS test 143(2)
7.7.5 Distribution based tests of independence 145(5)
7.7.6 Generalized spectrum and tests of independence 150(3)
7.7.7 Density based tests of independence 153(5)
7.7.8 Some examples of independence testing 158(4)
8 Models of conditional heteroskedasticity 162(57)
8.1 Autoregressive conditional heteroskedasticity 163(1)
8.1.1 The ARCH model 163(1)
8.2 The Generalized ARCH model 164(24)
8.2.1 Why Generalized ARCH? 164(1)
8.2.2 Families of univariate GARCH models 164(3)
8.2.3 Nonlinear GARCH 167(2)
8.2.4 Time-varying GARCH 169(1)
8.2.5 Moment structure of first-order GARCH models 170(2)
8.2.6 Moment structure of higher-order GARCH models 172(1)
8.2.7 Integrated and fractionally integrated GARCH 172(3)
8.2.8 Stylized facts and the GARCH model 175(3)
8.2.9 Building univariate GARCH models 178(10)
8.3 Family of Exponential GARCH models 188(8)
8.3.1 Moment structure of EGARCH model 189(1)
8.3.2 Stylized facts and the EGARCH model 190(1)
8.3.3 Building EGARCH models 191(5)
8.4 The Autoregressive Stochastic Volatility model 196(3)
8.4.1 Definition 196(1)
8.4.2 Moment structure of ARSV models 197(1)
8.4.3 Stylized facts and the stochastic volatility model 198(1)
8.4.4 Estimation of ARSV models 198(1)
8.4.5 Comparing the ARSV model with GARCH 199(1)
8.5 GARCH-in-Mean model 199(1)
8.6 Realized volatility 200(2)
8.7 Multivariate GARCH models 202(17)
8.7.1 General multivariate GARCH model 202(1)
8.7.2 Link to random coefficient models 203(1)
8.7.3 Constant Conditional Correlation GARCH 204(2)
8.7.4 Testing the constant correlation assumption and the Dynamic Conditional Correlation model 206(3)
8.7.5 Other extensions to the CCC-GARCH model 209(2)
8.7.6 The BEKK-GARCH model 211(2)
8.7.7 Factor GARCH models 213(6)
9 Time-varying parameters and state space models 219(33)
9.1 Introduction 219(2)
9.2 Linear state space models 221(2)
9.3 Time-varying parameter models 223(1)
9.4 Nonlinear state space models 224(11)
9.4.1 Extended Kalman filter 225(1)
9.4.2 Kitagawa's grid approximation 226(2)
9.4.3 Monte Carlo methods 228(1)
9.4.4 Particle filters 229(2)
9.4.5 Approximating with a Gaussian density 231(4)
9.5 Hidden Markov chains and regimes 235(7)
9.5.1 Hidden Markov chains 235(3)
9.5.2 Mixture models 238(4)
9.6 Estimating parameters 242(10)
9.6.1 Stationarity 242(3)
9.6.2 Identification 245(1)
9.6.3 Estimation in linear models 245(2)
9.6.4 The nonlinear case 247(3)
9.6.5 Estimation in hidden Markov and mixture models 250(2)
10 Nonparametric models 252(27)
10.1 Additive models 252(17)
10.1.1 Estimation in purely additive models 255(1)
10.1.2 Marginal integration 255(2)
10.1.3 Backfitting and smoothed backfitting 257(3)
10.1.4 Additive models with interactions 260(2)
10.1.5 A simulated example 262(1)
10.1.6 Nonparametric and additive estimation of the conditional variance function 263(6)
10.2 Some related models 269(3)
10.2.1 Functional coefficient autoregressive models 269(1)
10.2.2 Transformation of dependent variables and the ACE algorithm 269(1)
10.2.3 Regression trees, splines, and MARS 270(1)
10.2.4 Quantile regression 270(2)
10.3 Semiparametric models 272(5)
10.3.1 Index models 273(1)
10.3.2 Projection pursuit regression 274(2)
10.3.3 Partially linear models 276(1)
10.4 Robust and adaptive estimation 277(2)
11 Nonlinear and nonstationary models 279(28)
11.1 Long memory models 279(6)
11.2 Linear unit root models 285(3)
11.3 Vector autoregressive processes and linear cointegration 288(2)
11.4 Nonlinear I(1) processes 290(3)
11.5 Nonlinear error correction models 293(9)
11.6 Parametric nonlinear regression 297(5)
11.7 Nonparametric estimation in a nonlinear cointegration type framework 302(2)
11.8 Stochastic unit root models 304(3)
12 Algorithms for estimating parametric nonlinear models 307(22)
12.1 Optimization without derivatives 308(9)
12.1.1 Grid and line searches 308(1)
12.1.2 Conjugate directions 309(2)
12.1.3 Simulated annealing 311(3)
12.1.4 Evolutionary algorithms 314(3)
12.2 Methods requiring derivatives 317(7)
12.2.1 Gradient methods 317(5)
12.2.2 Variable metric methods 322(2)
12.3 Other methods 324(5)
12.3.1 EM algorithm 324(2)
12.3.2 Sequential estimation for neural networks 326(3)
13 Basic nonparametric estimates 329(15)
13.1 Density estimation 329(5)
13.1.1 Kernel estimation 329(2)
13.1.2 Bias and variance reduction 331(2)
13.1.3 Choice of bandwidth 333(1)
13.1.4 Variable bandwidth and nearest neighbour estimation 333(1)
13.1.5 Multivariate density estimation 334(1)
13.2 Nonparametric regression estimation 334(10)
13.2.1 Kernel regression estimation 335(2)
13.2.2 Local polynomial estimation 337(1)
13.2.3 Bias, convolution, and higher-order kernels 338(1)
13.2.4 Nearest neighbour estimation 339(2)
13.2.5 Splines and MARS 341(1)
13.2.6 Series expansion 341(1)
13.2.7 Choice of bandwidth for nonparametric regression 342(2)
14 Forecasting from nonlinear models 344(20)
14.1 Introduction 344(1)
14.2 Conditional mean forecasts from parametric models 345(6)
14.2.1 Analytical point forecasts 345(2)
14.2.2 Numerical techniques in forecasting 347(4)
14.3 Forecasting with nonparametric models 351(3)
14.4 Forecast accuracy 354(2)
14.5 The usefulness of forecasts from nonlinear models 356(5)
14.6 Forecasting volatility 361(1)
14.7 Overview of forecasting from nonlinear models 362(2)
15 Nonlinear impulse responses 364(6)
15.1 Generalized impulse response function 364(3)
15.2 Graphical representation 367(3)
16 Building nonlinear models 370(82)
16.1 General considerations 370(1)
16.2 Nonparametric and semiparametric models 371(4)
16.3 Building smooth transition regression models 375(43)
16.3.1 The three stages of the modelling procedure 375(1)
16.3.2 Specification 376(4)
16.3.3 Estimation of parameters 380(1)
16.3.4 Evaluation 381(8)
16.3.5 Graphical tools for characterizing the dynamic behaviour of the STAR model 389(1)
16.3.6 Examples 390(28)
16.4 Building switching regression models 418(16)
16.4.1 Specification 419(3)
16.4.2 Estimation and evaluation 422(1)
16.4.3 Examples 423(11)
16.5 Building artificial neural network models 434(11)
16.5.1 Specification 435(2)
16.5.2 Estimation 437(1)
16.5.3 Evaluation 438(1)
16.5.4 Alternative modelling approaches 439(1)
16.5.5 Examples 439(6)
16.6 Two forecast comparisons 445(7)
16.6.1 Forecasting Wolf's annual sunspot numbers 445(3)
16.6.2 Forecasting the monthly US unemployment rate 448(4)
17 Other topics 452(18)
17.1 Aggregation 452(6)
17.2 Seasonality 458(7)
17.2.1 Time-varying seasonality 458(5)
17.2.2 Temporal aggregation and time-varying seasonality 463(1)
17.2.3 Nonlinear filters in seasonal adjustment 464(1)
17.3 Outliers and nonlinearity 465(5)
17.3.1 What is an outlier? 465(1)
17.3.2 Model-based definitions 466(4)
Bibliography 470(67)
Author Index 537(12)
General Index 549
Timo Teräsvirta received his DPolSc (Econometrics) from the University of Helsinki in 1970. He has been Senior Research Fellow of the Academy of Finland (1972-76), Professor of Statistics at the University of Helsinki (1976-80), Visiting Scholar at CORE, Louvain-la-Neuve (1978-79), Research Fellow at the Research Institute of the Finnish Economy (1980-89), Research Fellow at Norges Bank (1992-93, 1994, 2000), and Professor of Econometrics at the Stockholm School of Economics (1994-2006). He has been a Visiting Professor at several universities, including the University of California, San Diego, the University of Technology, Sydney, the Central European University, Budapest, and the Hanken School of Economics, Helsinki. Teräsvirta is an elected member of the International Statistical Institute, the Finnish Society of Sciences and Letters, Helsinki, and the Royal Academy of Sciences, Stockholm. He is a Distinguished Author of the Journal of Applied Econometrics and a Fellow of the Journal of Econometrics.

Dag Tjøstheim holds a PhD in Applied Mathematics from Princeton University (1974). He was Research Scientist at the seismic observatory NORSAR (1974-77) and Associate Professor at the Norwegian Business School (1977-80). He was Visiting Professor at the University of North Carolina, Chapel Hill (1983-84) and at the University of California, San Diego (1990-91). He has worked on time series and related areas, including spatial processes, with applications in econometrics, fishery statistics, seismology, and meteorology. Tjøstheim has served as main editor of the Scandinavian Journal of Statistics and as Associate Editor of Bernoulli, the Journal of the Royal Statistical Society Series B, and the Journal of Time Series Analysis. He is the recipient of the Tjalling Koopmans Prize in Econometric Theory (1999-2002) and the Norwegian Sverdrup Prize (2009). He is an elected member of the International Statistical Institute and the Norwegian Academy of Science.

Clive W. J. Granger was Professor Emeritus at the University of California, San Diego. In 2003, he was awarded the Nobel Memorial Prize in Economic Sciences for fundamental discoveries in the analysis of time series data.