
E-book: Linear and Nonlinear Optimization

  • Format: PDF+DRM
  • Price: 110.53 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not permitted

  • Printing:

    not permitted

  • Use:

    Digital Rights Management (DRM)
    The publisher has released this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

This book is a monograph on Linear and Nonlinear Optimization intended for graduate and advanced undergraduate students in Operations Research, and it is at once literate and mathematically strong, yet requires no prior course in optimization. The book is divided into two parts – one on Linear Programming, one on Nonlinear Programming – and covers in individual chapters LP Models and Applications; Linear Equations and Inequalities; The Simplex Algorithm; The Simplex Algorithm Continued; Duality and the Dual Simplex Algorithm; Postoptimality Analyses; Computational Considerations; Nonlinear (NLP) Models and Applications; Unconstrained Optimization; Descent Methods; Optimality Conditions; Problems with Linear Constraints; Problems with Nonlinear Constraints; Interior-Point Methods; and an Appendix covering Mathematical Concepts. Each chapter includes end-of-chapter exercises.

The book is based on lecture notes the authors have used in the numerous optimization courses they have taught at Stanford University. It emphasizes modeling and numerical algorithms for optimization with continuous (not integer) variables. The discussion presents the underlying theory without always focusing on formal mathematical proofs (which can be found in cited references). Another feature of this book is its inclusion of cultural and historical matters, most often appearing among the footnotes.


"This book is a real gem. The authors do a masterful job of rigorously presenting all of the relevant theory clearly and concisely while managing to avoid unnecessary tedious mathematical details.  This is an ideal book for teaching a one or two semester masters-level course in optimization – it broadly covers linear and nonlinear programming effectively balancing modeling, algorithmic theory, computation, implementation, illuminating historical facts, and numerous interesting examples and exercises. Due to the clarity of the exposition, this book also serves as a valuable reference for self-study."

Professor Ilan Adler
IEOR Department
UC Berkeley

"A carefully crafted introduction to the main elements and applications of mathematical optimization. This volume presents the essential concepts of linear and nonlinear programming in an accessible format filled with anecdotes, examples, and exercises that bring the topic to life. The authors plumb their decades of experience in optimization to provide an enriching layer of historical context. Suitable for advanced undergraduates and masters students in management science, operations research, and related fields."

Michael P. Friedlander
IBM Professor of Computer Science
Professor of Mathematics
University of British Columbia

Reviews

"The historical notes in the book are interesting and well placed. The book's list of important references is quite complete. … this book is destined to become a classic in the field for beginning graduate students in optimization." (S. Zlobec, Mathematical Reviews, January, 2018)

Table of Contents

List Of Figures xiii
List Of Tables xv
List Of Examples xvii
List Of Algorithms xxi
Preface xxiii
PART I LINEAR PROGRAMMING (LP)
1 LP Models And Applications 1(28)
1.1 A linear programming problem 1(4)
1.2 Linear programs and linear functions 5(2)
1.3 LP models and applications 7(15)
1.4 Exercises 22(7)
2 Linear Equations And Inequalities 29(32)
2.1 Equivalent forms of the LP 30(11)
2.2 Polyhedral sets 41(16)
2.3 Exercises 57(4)
3 The Simplex Algorithm 61(24)
3.1 Preliminaries 62(8)
3.2 A case where the sufficient condition is also necessary 70(1)
3.3 Seeking another BFS 71(3)
3.4 Pivoting 74(4)
3.5 Steps of the Simplex Algorithm 78(3)
3.6 Exercises 81(4)
4 The Simplex Algorithm Continued 85(32)
4.1 Finding a feasible solution 85(9)
4.2 Dealing with degeneracy 94(7)
4.3 Revised Simplex Method 101(10)
4.4 Exercises 111(6)
5 Duality And The Dual Simplex Algorithm 117(22)
5.1 Dual linear programs 117(13)
5.2 Introduction to the Dual Simplex Algorithm 130(5)
5.3 Exercises 135(4)
6 Postoptimality Analyses 139(46)
6.1 Changing model dimensions 141(12)
6.2 Ranging (Sensitivity analysis) 153(6)
6.3 Parametric linear programming 159(14)
6.4 Shadow prices and their interpretation 173(5)
6.5 Exercises 178(7)
7 Some Computational Considerations 185(48)
7.1 Problems with explicitly bounded variables 185(12)
7.2 Constructing a starting (feasible) basis 197(7)
7.3 Steepest-edge rule for incoming column selection 204(4)
7.4 Structured linear programs 208(16)
7.5 Computational complexity of the Simplex Algorithm 224(4)
7.6 Exercises 228(5)
PART II NONLINEAR PROGRAMMING (NLP)
8 NLP Models And Applications 233(38)
8.1 Nonlinear programming 233(4)
8.2 Unconstrained nonlinear programs 237(9)
8.3 Linearly constrained nonlinear programs 246(3)
8.4 Quadratic programming 249(12)
8.5 Nonlinearly constrained nonlinear programs 261(4)
8.6 Exercises 265(6)
9 Unconstrained Optimization 271(46)
9.1 Generic Optimization Algorithm 272(1)
9.2 Optimality conditions for univariate minimization 273(3)
9.3 Finite termination versus convergence of algorithms 276(2)
9.4 Zero-finding methods 278(12)
9.5 Univariate minimization 290(9)
9.6 Optimality conditions for multivariate minimization 299(2)
9.7 Methods for minimizing smooth unconstrained functions 301(4)
9.8 Steplength algorithms 305(6)
9.9 Exercises 311(6)
10 Descent Methods 317(52)
10.1 The Steepest-descent Method 318(9)
10.2 Newton's Method 327(19)
10.3 Quasi-Newton Methods 346(12)
10.4 The Conjugate-gradient Method 358(3)
10.5 Exercises 361(8)
11 Optimality Conditions 369(50)
11.1 Statement of the problem 370(1)
11.2 First-order optimality conditions 371(14)
11.3 Second-order optimality conditions 385(6)
11.4 Convex programs 391(4)
11.5 Elementary duality theory for nonlinear programming 395(13)
11.6 Exercises 408(11)
12 Problems With Linear Constraints 419(50)
12.1 Linear equality constraints 419(12)
12.2 Methods for computing Z 431(4)
12.3 Linear inequality constraints 435(6)
12.4 Active Set Methods 441(5)
12.5 Special cases 446(17)
12.6 Exercises 463(6)
13 Problems With Nonlinear Constraints 469(48)
13.1 Nonlinear equality constraints 469(8)
13.2 Nonlinear inequality constraints 477(3)
13.3 Overview of algorithm design 480(2)
13.4 Penalty-function Methods 482(7)
13.5 Reduced-gradient and Gradient-projection Methods 489(2)
13.6 Augmented Lagrangian Methods 491(10)
13.7 Projected Lagrangian Methods 501(3)
13.8 Sequential Quadratic Programming (SQP) Methods 504(8)
13.9 Exercises 512(5)
14 Interior-Point Methods 517(20)
14.1 Barrier-function methods 518(7)
14.2 Primal barrier-function method for linear programs 525(4)
14.3 Primal-Dual barrier function for linear programs 529(4)
14.4 Exercises 533(4)
Appendix 537(48)
A Some Mathematical Concepts 537(48)
A.1 Basic terminology and notation from set theory 537(3)
A.2 Norms and metrics 540(4)
A.3 Properties of vectors and matrices 544(4)
A.4 Properties of sets in R^n 548(6)
A.5 Properties of functions on R^n 554(7)
A.6 Convex sets 561(5)
A.7 Properties of convex functions 566(8)
A.8 Conjugate duality for convex programs 574(8)
A.9 Exercises 582(3)
Glossary Of Notation 585(2)
Bibliography 587
Index 603
Richard W. Cottle is a Professor Emeritus in the Department of Management Science and Engineering at Stanford University. He received the degrees of A.B. and A.M. from Harvard University and the Ph.D. from the University of California at Berkeley, all three in mathematics. Under the supervision of George B. Dantzig, Cottle wrote a dissertation in which the linear and nonlinear complementarity problems were introduced. Upon completion of his doctoral studies at Berkeley, Cottle became a member of the technical staff of Bell Telephone Laboratories in Holmdel, New Jersey. Two years later, he joined the operations research faculty at Stanford University, where he remained until his retirement in 2005. For nearly 40 years at Stanford, Cottle taught at the undergraduate, master's, and doctoral levels in a variety of optimization courses, including linear and nonlinear programming, complementarity and equilibrium programming, and matrix theory. (The present volume is an outgrowth of one such course.) Most of Cottle's research lies within these fields. A former Editor-in-Chief of the journals Mathematical Programming and Mathematical Programming Study, Richard Cottle is well known for The Linear Complementarity Problem, a Lanchester Prize-winning monograph he co-authored with two of his former doctoral students, Jong-Shi Pang and Richard E. Stone. In retirement he remains active in research and writing.

Mukund N. Thapa is the President & CEO of Optical Fusion, Inc., and President of Stanford Business Software, Inc. He received a bachelor of technology degree in metallurgical engineering from the Indian Institute of Technology, Bombay; his bachelor's thesis was on operations research techniques in iron and steel making. He later obtained M.S. and Ph.D. degrees in operations research from Stanford University in 1981. His Ph.D. thesis was concerned with developing specialized algorithms for solving large-scale unconstrained nonlinear minimization problems. By profession he is a software developer who produces commercial software products as well as commercial-quality custom software. Since 1978, Dr. Thapa has been applying the theory of operations research, statistics, and computer science to develop efficient, practical, and usable solutions to a variety of problems.

At Optical Fusion, Inc., Dr. Thapa architected the development of a multi-point videoconferencing system for use over all IP-based networks, and he holds several patents in the area. The feature-rich system focuses primarily on the needs of users and allows corporate users to seamlessly integrate conferencing into everyday business interactions. At Stanford Business Software, Dr. Thapa ensures that the company produces high-quality turnkey software for clients. His expert knowledge of user-friendly interfaces, databases, computer science, and modular software design plays an important role in making the software practical and robust. His specialty is the application of numerical analysis methodology to solve mathematical optimization problems. An experienced modeler, he is often asked by clients to consult, prepare analyses, and write position papers. At the Department of Management Science and Engineering, Stanford University, Dr. Thapa has taught graduate-level courses in mathematical programming computation and numerical methods of linear programming. He is best known for his books with George B. Dantzig: Linear Programming 1: Introduction and Linear Programming 2: Theory and Extensions.