
E-book: Numerical Methods and Optimization: An Introduction

Sergiy Butenko (Texas A&M University, College Station, USA), Panos M. Pardalos (University of Florida, Gainesville, USA)
  • Format: PDF+DRM
  • Price: €110.49*
  • * The price is final, i.e. no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Before students in industrial and systems engineering (ISE) and operations research (OR) can understand optimization at an advanced level, they must first grasp the analysis of algorithms, computational complexity, and other concepts and modern developments in numerical methods. Satisfying this prerequisite, Numerical Methods and Optimization: An Introduction combines material from introductory numerical methods and introductory optimization courses into a single text. This classroom-tested approach enriches a standard numerical methods syllabus with optional chapters on numerical optimization and provides a valuable numerical methods background for students taking an introductory OR or optimization course.

The first part of the text introduces the necessary mathematical background, the digital representation of numbers, and different types of errors associated with numerical methods. The second part explains how to solve typical problems using numerical methods. Focusing on optimization methods, the final part presents basic theory and algorithms for linear and nonlinear optimization.

The book assumes minimal prior knowledge of the topics. Taking a rigorous yet accessible approach to the material, it includes some mathematical proofs as samples of rigorous analysis but in most cases, uses only examples to illustrate the concepts. While the authors provide a MATLAB® guide and code available for download, the book can be used with other software packages.

Reviews

"The book is in most parts very well developed and is served by nice illustrations, a fluid style of writing, and a layout that makes it easy to read. [ it will] serve well its purpose of bridging the gap between numerical analysis, operations research, and mathematical optimization for undergraduate students in the applied sciences." Mathematical Reviews, August 2014

"If you are looking for an enjoyable and useful introduction to the basic topics of numerical methods and optimization, this is the right text to read. The authors are not only experienced lecturers but also active researchers in this area. They present the basic topics of numerical methods and optimization in an easy-to-follow, yet rigorous manner. In particular, they gently introduce some important topics, such as computational complexity, which are usually unavailable in textbooks on optimization for engineers. The authors occasionally turn to mathematical humor (such as There are 10 types of peoplethose who understand binary, and those who don't) to illustrate some material in the text. This informal contact with the reader exemplifies the engaging style of exposition characteristic of this excellent book." Oleg Burdakov, Linkoeping University, Sweden

Table of Contents

I Basics
1(52)
1 Preliminaries
3(34)
1.1 Sets and Functions
3(3)
1.2 Fundamental Theorem of Algebra
6(1)
1.3 Vectors and Linear (Vector) Spaces
7(5)
1.3.1 Vector norms
10(2)
1.4 Matrices and Their Properties
12(13)
1.4.1 Matrix addition and scalar multiplication
12(1)
1.4.2 Matrix multiplication
13(1)
1.4.3 The transpose of a matrix
14(1)
1.4.4 Triangular and diagonal matrices
15(1)
1.4.5 Determinants
16(1)
1.4.6 Trace of a matrix
17(1)
1.4.7 Rank of a matrix
18(1)
1.4.8 The inverse of a nonsingular matrix
18(1)
1.4.9 Eigenvalues and eigenvectors
19(3)
1.4.10 Quadratic forms
22(2)
1.4.11 Matrix norms
24(1)
1.5 Preliminaries from Real and Functional Analysis
25(12)
1.5.1 Closed and open sets
26(1)
1.5.2 Sequences
26(1)
1.5.3 Continuity and differentiability
27(3)
1.5.4 Big O and little o notations
30(1)
1.5.5 Taylor's theorem
31(6)
2 Numbers and Errors
37(16)
2.1 Conversion between Different Number Systems
39(5)
2.1.1 Conversion of integers
40(2)
2.1.2 Conversion of fractions
42(2)
2.2 Floating Point Representation of Numbers
44(1)
2.3 Definitions of Errors
45(2)
2.4 Round-off Errors
47(6)
2.4.1 Rounding and chopping
47(1)
2.4.2 Arithmetic operations
48(1)
2.4.3 Subtractive cancellation and error propagation
49(4)
II Numerical Methods for Standard Problems
53(106)
3 Elements of Numerical Linear Algebra
55(32)
3.1 Direct Methods for Solving Systems of Linear Equations
57(12)
3.1.1 Solution of triangular systems of linear equations
57(2)
3.1.2 Gaussian elimination
59(3)
3.1.2.1 Pivoting strategies
62(1)
3.1.3 Gauss-Jordan method and matrix inversion
63(3)
3.1.4 Triangular factorization
66(3)
3.2 Iterative Methods for Solving Systems of Linear Equations
69(6)
3.2.1 Jacobi method
70(2)
3.2.2 Gauss-Seidel method
72(2)
3.2.3 Application: input-output models in economics
74(1)
3.3 Overdetermined Systems and Least Squares Solution
75(2)
3.3.1 Application: linear regression
76(1)
3.4 Stability of a Problem
77(1)
3.5 Computing Eigenvalues and Eigenvectors
78(9)
3.5.1 The power method
79(1)
3.5.2 Application: ranking methods
80(7)
4 Solving Equations
87(26)
4.1 Fixed Point Method
88(4)
4.2 Bracketing Methods
92(7)
4.2.1 Bisection method
93(1)
4.2.1.1 Convergence of the bisection method
93(2)
4.2.1.2 Intervals with multiple roots
95(1)
4.2.2 Regula-falsi method
96(2)
4.2.3 Modified regula-falsi method
98(1)
4.3 Newton's Method
99(6)
4.3.1 Convergence rate of Newton's method
103(2)
4.4 Secant Method
105(1)
4.5 Solution of Nonlinear Systems
106(7)
4.5.1 Fixed point method for systems
106(1)
4.5.2 Newton's method for systems
107(6)
5 Polynomial Interpolation
113(14)
5.1 Forms of Polynomials
115(1)
5.2 Polynomial Interpolation Methods
116(4)
5.2.1 Lagrange method
117(1)
5.2.2 The method of undetermined coefficients
118(1)
5.2.3 Newton's method
118(2)
5.3 Theoretical Error of Interpolation and Chebyshev Polynomials
120(7)
5.3.1 Properties of Chebyshev polynomials
122(5)
6 Numerical Integration
127(14)
6.1 Trapezoidal Rule
129(2)
6.2 Simpson's Rule
131(1)
6.3 Precision and Error of Approximation
132(2)
6.4 Composite Rules
134(3)
6.4.1 The composite trapezoidal rule
134(1)
6.4.2 Composite Simpson's rule
135(2)
6.5 Using Integrals to Approximate Sums
137(4)
7 Numerical Solution of Differential Equations
141(18)
7.1 Solution of a Differential Equation
142(1)
7.2 Taylor Series and Picard's Methods
143(2)
7.3 Euler's Method
145(2)
7.3.1 Discretization errors
147(1)
7.4 Runge-Kutta Methods
147(5)
7.4.1 Second-order Runge-Kutta methods
148(3)
7.4.2 Fourth-order Runge-Kutta methods
151(1)
7.5 Systems of Differential Equations
152(3)
7.6 Higher-Order Differential Equations
155(4)
III Introduction to Optimization
159(228)
8 Basic Concepts
161(24)
8.1 Formulating an Optimization Problem
161(3)
8.2 Mathematical Description
164(2)
8.3 Local and Global Optimality
166(2)
8.4 Existence of an Optimal Solution
168(1)
8.5 Level Sets and Gradients
169(4)
8.6 Convex Sets, Functions, and Problems
173(12)
8.6.1 First-order characterization of a convex function
177(2)
8.6.2 Second-order characterization of a convex function
179(6)
9 Complexity Issues
185(26)
9.1 Algorithms and Complexity
185(4)
9.2 Average Running Time
189(1)
9.3 Randomized Algorithms
190(1)
9.4 Basics of Computational Complexity Theory
191(7)
9.4.1 Class NP
193(1)
9.4.2 P vs. NP
193(1)
9.4.3 Polynomial time reducibility
194(1)
9.4.4 NP-complete and NP-hard problems
195(3)
9.5 Complexity of Local Optimization
198(5)
9.6 Optimal Methods for Nonlinear Optimization
203(8)
9.6.1 Classes of methods
203(1)
9.6.2 Establishing lower complexity bounds for a class of methods
204(2)
9.6.3 Defining an optimal method
206(5)
10 Introduction to Linear Programming
211(24)
10.1 Formulating a Linear Programming Model
211(2)
10.1.1 Defining the decision variables
211(1)
10.1.2 Formulating the objective function
212(1)
10.1.3 Specifying the constraints
212(1)
10.1.4 The complete linear programming formulation
213(1)
10.2 Examples of LP Models
213(8)
10.2.1 A diet problem
213(1)
10.2.2 A resource allocation problem
214(1)
10.2.3 A scheduling problem
215(2)
10.2.4 A mixing problem
217(2)
10.2.5 A transportation problem
219(1)
10.2.6 A production planning problem
220(1)
10.3 Practical Implications of Using LP Models
221(1)
10.4 Solving Two-Variable LPs Graphically
222(7)
10.5 Classification of LPs
229(6)
11 The Simplex Method for Linear Programming
235(46)
11.1 The Standard Form of LP
235(2)
11.2 The Simplex Method
237(14)
11.2.1 Step 1
239(3)
11.2.2 Step 2
242(2)
11.2.3 Recognizing optimality
244(1)
11.2.4 Recognizing unbounded LPs
244(1)
11.2.5 Degeneracy and cycling
245(4)
11.2.6 Properties of LP dictionaries and the simplex method
249(2)
11.3 Geometry of the Simplex Method
251(3)
11.4 The Simplex Method for a General LP
254(12)
11.4.1 The two-phase simplex method
259(5)
11.4.2 The big-M method
264(2)
11.5 The Fundamental Theorem of LP
266(1)
11.6 The Revised Simplex Method
266(10)
11.7 Complexity of the Simplex Method
276(5)
12 Duality and Sensitivity Analysis in Linear Programming
281(36)
12.1 Defining the Dual LP
281(6)
12.1.1 Forming the dual of a general LP
284(3)
12.2 Weak Duality and the Duality Theorem
287(2)
12.3 Extracting an Optimal Solution of the Dual LP from an Optimal Tableau of the Primal LP
289(1)
12.4 Correspondence between the Primal and Dual LP Types
290(1)
12.5 Complementary Slackness
291(3)
12.6 Economic Interpretation of the Dual LP
294(2)
12.7 Sensitivity Analysis
296(21)
12.7.1 Changing the objective function coefficient of a basic variable
302(1)
12.7.2 Changing the objective function coefficient of a nonbasic variable
303(2)
12.7.3 Changing the column of a nonbasic variable
305(1)
12.7.4 Changing the right-hand side
305(2)
12.7.5 Introducing a new variable
307(1)
12.7.6 Introducing a new constraint
308(2)
12.7.7 Summary
310(7)
13 Unconstrained Optimization
317(34)
13.1 Optimality Conditions
317(6)
13.1.1 First-order necessary conditions
317(3)
13.1.2 Second-order optimality conditions
320(2)
13.1.3 Using optimality conditions for solving optimization problems
322(1)
13.2 Optimization Problems with a Single Variable
323(4)
13.2.1 Golden section search
323(2)
13.2.1.1 Fibonacci search
325(2)
13.3 Algorithmic Strategies for Unconstrained Optimization
327(1)
13.4 Method of Steepest Descent
328(5)
13.4.1 Convex quadratic case
330(1)
13.4.2 Global convergence of the steepest descent method
331(2)
13.5 Newton's Method
333(3)
13.5.1 Rate of convergence
334(1)
13.5.2 Guaranteeing the descent
335(1)
13.5.3 Levenberg-Marquardt method
335(1)
13.6 Conjugate Direction Method
336(6)
13.6.1 Conjugate direction method for convex quadratic problems
337(3)
13.6.2 Conjugate gradient algorithm
340(1)
13.6.2.1 Non-quadratic problems
341(1)
13.7 Quasi-Newton Methods
342(4)
13.7.1 Rank-one correction formula
344(1)
13.7.2 Other correction formulas
345(1)
13.8 Inexact Line Search
346(5)
14 Constrained Optimization
351(36)
14.1 Optimality Conditions
351(17)
14.1.1 First-order necessary conditions
351(1)
14.1.1.1 Problems with equality constraints
351(7)
14.1.1.2 Problems with inequality constraints
358(5)
14.1.2 Second-order conditions
363(1)
14.1.2.1 Problems with equality constraints
363(1)
14.1.2.2 Problems with inequality constraints
364(4)
14.2 Duality
368(3)
14.3 Projected Gradient Methods
371(6)
14.3.1 Affine scaling method for LP
374(3)
14.4 Sequential Unconstrained Minimization
377(10)
14.4.1 Penalty function methods
378(1)
14.4.2 Barrier methods
379(2)
14.4.3 Interior point methods
381(6)
Notes and References 387(2)
Bibliography 389(2)
Index 391
Sergiy Butenko, Panos M. Pardalos