
E-book: Introduction to Online Convex Optimization

  • Format: PDF+DRM
  • Price: 124,80 €*
  • * The price is final, i.e. no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

New edition of a graduate-level textbook that focuses on online convex optimization, a machine learning framework that views optimization as a process.

In many practical applications, the environment is so complex that it is not feasible to lay out a comprehensive theoretical model and use classical algorithmic theory and/or mathematical optimization. Introduction to Online Convex Optimization presents a robust machine learning approach that contains elements of mathematical optimization, game theory, and learning theory: an optimization method that learns from experience as more aspects of the problem are observed. This view of optimization as a process has led to some spectacular successes in modeling and systems that have become part of our daily lives.
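
To make the "optimization as a process" viewpoint concrete, here is a minimal sketch of online gradient descent, the basic algorithm of this framework: the learner repeatedly commits to a point, observes the current loss's gradient, and updates. The quadratic losses, the step-size schedule, and all names below are illustrative assumptions, not code from the book.

```python
# A minimal sketch of "optimization as a process": online gradient descent
# over a stream of convex losses. The losses and constants are assumptions
# chosen for illustration.
import numpy as np

def online_gradient_descent(grad_fns, x0, radius=1.0):
    """Play x_t, observe the gradient of loss f_t, take a step, and project
    back onto the Euclidean ball of the given radius."""
    x = np.asarray(x0, dtype=float)
    decisions = []
    for t, grad in enumerate(grad_fns, start=1):
        decisions.append(x.copy())
        eta = radius / np.sqrt(t)          # standard O(1/sqrt(t)) step size
        x = x - eta * grad(x)              # gradient step on this round's loss
        norm = np.linalg.norm(x)
        if norm > radius:                  # projection onto the feasible ball
            x = x * (radius / norm)
    return decisions

# Example: track drifting targets with quadratic losses f_t(x) = ||x - c_t||^2.
rng = np.random.default_rng(0)
targets = [rng.normal(scale=0.3, size=2) for _ in range(100)]
grads = [lambda x, c=c: 2.0 * (x - c) for c in targets]
plays = online_gradient_descent(grads, x0=np.zeros(2))
```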

Based on the “Theoretical Machine Learning” course taught by the author at Princeton University, the second edition of this widely used graduate-level text features:
  • Thoroughly updated material throughout
  • New chapters on boosting, adaptive regret, and approachability and expanded exposition on optimization
  • Examples of applications offered throughout, including prediction from expert advice, portfolio selection, matrix completion and recommendation systems, and SVM training (see the expert-advice sketch after this list)
  • Exercises that guide students in completing parts of proofs
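
As a taste of the first application listed above, here is a minimal sketch of prediction from expert advice via multiplicative weights, in the spirit of the book's "gentle start" chapter. The loss model, the epsilon value, and all names are assumptions made for illustration, not the book's code.

```python
# A minimal sketch of prediction from expert advice via multiplicative
# weights: each round, losses in [0, 1] arrive for N experts and the learner
# reweights them. The epsilon below is an assumed constant.
import numpy as np

def multiplicative_weights(loss_rounds, n_experts, epsilon=0.1):
    """Maintain one weight per expert; after each round, shrink every
    expert's weight in proportion to the loss it just incurred."""
    w = np.ones(n_experts)
    total_loss = 0.0
    for losses in loss_rounds:             # losses: array of shape (n_experts,)
        p = w / w.sum()                    # probability of following each expert
        total_loss += float(p @ losses)    # learner's expected loss this round
        w *= (1.0 - epsilon) ** losses     # multiplicative penalty per expert
    return total_loss, w / w.sum()

# Example: 5 experts, 200 rounds of random losses in [0, 1].
rng = np.random.default_rng(1)
rounds = [rng.uniform(size=5) for _ in range(200)]
learner_loss, final_weights = multiplicative_weights(rounds, n_experts=5)
```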
Table of Contents

Preface
Acknowledgments
List of Figures
List of Symbols
1 Introduction
  1.1 The Online Convex Optimization Setting
  1.2 Examples of Problems That Can Be Modeled via Online Convex Optimization
  1.3 A Gentle Start: Learning from Expert Advice
  1.4 Bibliographic Remarks
  1.5 Exercises
2 Basic Concepts in Convex Optimization
  2.1 Basic Definitions and Setup
  2.2 Gradient Descent
  2.3 Constrained Gradient/Subgradient Descent
  2.4 Reductions to Nonsmooth and Nonstrongly Convex Functions
  2.5 Example: Support Vector Machine Training
  2.6 Bibliographic Remarks
  2.7 Exercises
3 First-Order Algorithms for Online Convex Optimization
  3.1 Online Gradient Descent
  3.2 Lower Bounds
  3.3 Logarithmic Regret
  3.4 Application: Stochastic Gradient Descent
  3.5 Bibliographic Remarks
  3.6 Exercises
4 Second-Order Methods
  4.1 Motivation: Universal Portfolio Selection
  4.2 Exp-Concave Functions
  4.3 Exponentially Weighted Online Convex Optimization
  4.4 The Online Newton Step Algorithm
  4.5 Bibliographic Remarks
  4.6 Exercises
5 Regularization
  5.1 Regularization Functions
  5.2 The RFTL Algorithm and Its Analysis
  5.3 Online Mirror Descent
  5.4 Application and Special Cases
  5.5 Randomized Regularization
  5.6 Adaptive Gradient Descent
  5.7 Bibliographic Remarks
  5.8 Exercises
6 Bandit Convex Optimization
  6.1 The Bandit Convex Optimization Setting
  6.2 The Multiarmed Bandit (MAB) Problem
  6.3 A Reduction from Limited Information to Full Information
  6.4 Online Gradient Descent without a Gradient
  6.5 Optimal Regret Algorithms for Bandit Linear Optimization
  6.6 Bibliographic Remarks
  6.7 Exercises
7 Projection-Free Algorithms
  7.1 Review: Relevant Concepts from Linear Algebra
  7.2 Motivation: Recommender Systems
  7.3 The Conditional Gradient Method
  7.4 Projections versus Linear Optimization
  7.5 The Online Conditional Gradient Algorithm
  7.6 Bibliographic Remarks
  7.7 Exercises
8 Games, Duality, and Regret
  8.1 Linear Programming and Duality
  8.2 Zero-Sum Games and Equilibria
  8.3 Proof of von Neumann Theorem
  8.4 Approximating Linear Programs
  8.5 Bibliographic Remarks
  8.6 Exercises
9 Learning Theory, Generalization, and Online Convex Optimization
  9.1 Statistical Learning Theory
  9.2 Agnostic Learning Using Online Convex Optimization
  9.3 Learning and Compression
  9.4 Bibliographic Remarks
  9.5 Exercises
10 Learning in Changing Environments
  10.1 A Simple Start: Dynamic Regret
  10.2 The Notion of Adaptive Regret
  10.3 Tracking the Best Expert
  10.4 Efficient Adaptive Regret for Online Convex Optimization
  10.5 Computationally Efficient Methods
  10.6 Bibliographic Remarks
  10.7 Exercises
11 Boosting and Regret
  11.1 The Problem of Boosting
  11.2 Boosting by Online Convex Optimization
  11.3 Bibliographic Remarks
  11.4 Exercises
12 Online Boosting
  12.1 Motivation: Learning from a Huge Set of Experts
  12.2 The Contextual Learning Model
  12.3 The Extension Operator
  12.4 The Online Boosting Method
  12.5 Bibliographic Remarks
  12.6 Exercises
13 Blackwell Approachability and Online Convex Optimization
  13.1 Vector-Valued Games and Approachability
  13.2 From OCO to Approachability
  13.3 From Approachability to OCO
  13.4 Bibliographic Remarks
  13.5 Exercises
Notes
References
Index