
E-book: Clinical Trials: A Methodologic Perspective

(Johns Hopkins Oncology Center)
  • Format: PDF+DRM
  • Price: 167.96 €*
  • * The price is final, i.e., no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free application: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

 

Presents elements of clinical trial methods that are essential in planning, designing, conducting, analyzing, and interpreting clinical trials, with the goal of improving the evidence derived from these important studies.

This Third Edition builds on the text's reputation as a straightforward, detailed, and authoritative presentation of quantitative methods for clinical trials. Readers will encounter the principles of design for various types of clinical trials, and are then skillfully guided through the complete process of planning the experiment, assembling a study cohort, assessing data, and reporting results. Throughout the process, the author alerts readers to problems that may arise during the course of the trial and provides common sense solutions. All stages of therapeutic development are discussed in detail, and the methods are not restricted to a single clinical application area.

The author bases current revisions and updates on his own experience, classroom instruction, and feedback from teachers and medical and statistical professionals involved in clinical trials. The Third Edition greatly expands its coverage, ranging from statistical principles to new and provocative topics, including alternative medicine and ethics, middle development, comparative studies, and adaptive designs. At the same time, it offers more pragmatic advice for issues such as selecting outcomes, sample size, analysis, reporting, and handling allegations of misconduct. Readers familiar with the First and Second Editions will discover revamped exercise sets; an updated and extensive reference section; new material on endpoints and the developmental pipeline, among other topics; and revisions of numerous sections.

In addition, this book:

Features accessible and broad coverage of statistical design methods, the crucial building blocks of clinical trials and medical research, now complete with new chapters on overall development, middle development, comparative studies, and adaptive designs

Teaches readers to design clinical trials that produce valid qualitative results backed by rigorous statistical methods

Contains an introduction and summary in each chapter to reinforce key points

Includes discussion questions to stimulate critical thinking and help readers understand how they can apply their newfound knowledge

Provides extensive references to direct readers to the most recent literature, along with numerous new or revised exercises throughout the book

Clinical Trials: A Methodologic Perspective, Third Edition is a textbook accessible to advanced undergraduate students in the quantitative sciences, graduate students in public health and the life sciences, physicians training in clinical research methods, and biostatisticians and epidemiologists.

This book is accompanied by downloadable files available below under the DOWNLOADS tab. 

These files include:

MATHEMATICA program: A set of downloadable files, organized by chapter, containing the code pertaining to each.

SAS PROGRAMS and DATA FILES used in the book.

The following software programs, included in the downloadables, were developed by the author, Steven Piantadosi, M.D., Ph.D.:

RANDOMIZATION: This program generates treatment assignments for a clinical trial using blocked stratified randomization.
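The program itself is distributed as an executable, so as orientation only, here is a minimal Python sketch of what blocked stratified randomization does; the function name, stratum labels, block size, and treatment codes below are illustrative assumptions, not details of the author's program:

```python
import random

def blocked_stratified_randomization(strata, treatments, block_size, seed=None):
    """Generate treatment assignments separately within each stratum, in
    randomly permuted blocks so the arms stay balanced as accrual proceeds.
    `block_size` must be a multiple of the number of treatments.
    Returns a dict mapping stratum -> list of assignments."""
    assert block_size % len(treatments) == 0
    rng = random.Random(seed)
    schedule = {}
    for stratum, n_subjects in strata.items():
        assignments = []
        while len(assignments) < n_subjects:
            # One balanced block: each treatment appears equally often...
            block = treatments * (block_size // len(treatments))
            rng.shuffle(block)  # ...in random order within the block.
            assignments.extend(block)
        schedule[stratum] = assignments[:n_subjects]
    return schedule

# Hypothetical example: two strata, two arms, blocks of 4.
print(blocked_stratified_randomization(
    {"stage I-II": 10, "stage III-IV": 6}, ["A", "B"], block_size=4, seed=1))
```

Permuting complete blocks within each stratum keeps the arms from drifting far out of balance even if accrual stops mid-block, which is the rationale for the design.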

CRM: Implements the continual reassessment method for dose-finding clinical trials.
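Again as orientation rather than a description of the author's executable, this sketch shows the core Bayesian update behind a one-parameter power-model CRM; the skeleton probabilities, normal prior variance of 1.34, grid, and target toxicity rate of 0.25 are illustrative assumptions:

```python
import math

SKELETON = [0.05, 0.10, 0.20, 0.35, 0.50]  # assumed prior dose-toxicity guesses
TARGET = 0.25                               # assumed target toxicity probability

def crm_recommend(doses_given, toxicities):
    """One-parameter power-model CRM: p_i(a) = skeleton_i ** exp(a),
    with a Normal(0, 1.34) prior on a; posterior computed on a grid."""
    grid = [-5 + 10 * k / 400 for k in range(401)]
    post = []
    for a in grid:
        w = math.exp(-a * a / (2 * 1.34))   # prior density (unnormalized)
        # Binomial likelihood of the observed (dose index, toxicity) pairs.
        for d, y in zip(doses_given, toxicities):
            p = SKELETON[d] ** math.exp(a)
            w *= p if y else (1 - p)
        post.append(w)
    total = sum(post)
    post = [w / total for w in post]
    # Posterior mean toxicity probability at each dose level.
    p_hat = [sum(w * (s ** math.exp(a)) for w, a in zip(post, grid))
             for s in SKELETON]
    # Recommend the dose level whose estimated toxicity is closest to target.
    return min(range(len(SKELETON)), key=lambda i: abs(p_hat[i] - TARGET))

# Hypothetical example: 3 patients at level 2 (index 1), one toxicity observed.
print(crm_recommend([1, 1, 1], [0, 1, 0]))
```

After each cohort the posterior is recomputed and the next cohort is assigned the recommended dose; practical implementations add safeguards such as never skipping untried dose levels, which this sketch omits.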

OPTIMAL: Calculates two-stage optimal phase II designs using the Simon method.
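For a sense of what such a program computes, here is an unoptimized brute-force sketch of the search for Simon's optimal two-stage design, written in Python with scipy; the response probabilities and error rates in the example are made up, and the author's OPTIMAL program remains the authoritative implementation:

```python
from scipy.stats import binom

def simon_optimal(p0, p1, alpha, beta, n_max=40):
    """Brute-force search for Simon's optimal two-stage phase II design.
    Stage 1: enroll n1; stop for futility if <= r1 responses.
    Stage 2: enroll to n total; declare the treatment active if > r responses.
    Among designs with type I error <= alpha and power >= 1 - beta,
    minimize the expected sample size under p0. Slow but transparent."""
    def p_active(p, r1, n1, n2, r):
        # P(pass stage 1 with x responses, then exceed r overall).
        return sum(binom.pmf(x, n1, p) * binom.sf(r - x, n2, p)
                   for x in range(r1 + 1, n1 + 1))

    best = None
    for n in range(2, n_max + 1):
        for n1 in range(1, n):
            n2 = n - n1
            for r1 in range(n1):
                pet0 = binom.cdf(r1, n1, p0)  # early-stop probability under p0
                en0 = n1 + (1 - pet0) * n2    # expected sample size under p0
                if best and en0 >= best[0]:
                    continue                  # cannot beat the incumbent design
                for r in range(r1, n):
                    if p_active(p0, r1, n1, n2, r) > alpha:
                        continue              # too liberal; raise r
                    if p_active(p1, r1, n1, n2, r) >= 1 - beta:
                        best = (en0, r1, n1, r, n)
                    break  # power only falls as r grows; stop here
    return best            # (E[N | p0], r1, n1, r, n)

# Hypothetical example: p0 = 0.10, p1 = 0.30, alpha = 0.05, beta = 0.20.
print(simon_optimal(0.10, 0.30, 0.05, 0.20))
```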

POWER: This is a power and sample size program for clinical trials.
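As a flavor of such calculations, here is a minimal sketch of the standard normal-approximation formula for the per-arm sample size when comparing two means, n = 2*sigma^2*(z_{1-alpha/2} + z_{1-beta})^2 / delta^2; the default alpha and power and the example inputs are illustrative assumptions:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of means
    using the normal approximation:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_b = z.inv_cdf(power)          # quantile corresponding to the power
    return ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

# Hypothetical example: detect a mean difference of 5 units with SD 10
# at 80% power and two-sided alpha 0.05 -> 63 subjects per group.
print(n_per_group(delta=5, sigma=10))
```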

Executables for installing these programs can also be found at https://risccweb.csmc.edu/biostats/.

Steven Piantadosi, MD, PhD, is the Phase One Foundation Distinguished Chair and Director of the Samuel Oschin Cancer Institute, and Professor of Medicine at Cedars-Sinai Medical Center in Los Angeles, California. Dr. Piantadosi is one of the world's leading experts in the design and analysis of clinical trials for cancer research. He has taught clinical trial methods extensively in formal courses and short venues. He has advised numerous academic programs and collaborations nationally regarding clinical trial design and conduct, and has served on external advisory boards for the National Institutes of Health and other prominent cancer programs and centers. The author of more than 260 peer-reviewed scientific articles, Dr. Piantadosi has published extensively on research results, clinical applications, and trial methodology. While his papers have contributed to many areas of oncology, he has also collaborated on diverse studies outside oncology including lung disease and degenerative neurological disease.
Preface to the Third Edition xxv
About the Companion Website xxviii
1 Preliminaries 1(9)
1.1 Introduction 1(1)
1.2 Audiences 2(1)
1.3 Scope 3(2)
1.4 Other Sources of Knowledge 5(1)
1.5 Notation and Terminology 6(3)
1.5.1 Clinical Trial Terminology 7(1)
1.5.2 Drug Development Traditionally Recognizes Four Trial Design Types 7(1)
1.5.3 Descriptive Terminology Is Better 8(1)
1.6 Examples, Data, and Programs 9(1)
1.7 Summary 9(1)
2 Clinical Trials as Research 10(33)
2.1 Introduction 10(3)
2.2 Research 13(6)
2.2.1 What Is Research? 13(1)
2.2.2 Clinical Reasoning Is Based on the Case History 14(2)
2.2.3 Statistical Reasoning Emphasizes Inference Based on Designed Data Production 16(1)
2.2.4 Clinical and Statistical Reasoning Converge in Research 17(2)
2.3 Defining Clinical Trials 19(10)
2.3.1 Mixing of Clinical and Statistical Reasoning Is Recent 19(2)
2.3.2 Clinical Trials Are Rigorously Defined 21(1)
2.3.3 Theory and Data 22(1)
2.3.4 Experiments Can Be Misunderstood 23(2)
2.3.5 Clinical Trials and the Frankenstein Myth 25(1)
2.3.6 Cavia porcellus 26(1)
2.3.7 Clinical Trials as Science 26(2)
2.3.8 Trials and Statistical Methods Fit within a Spectrum of Clinical Research 28(1)
2.4 Practicalities of Usage 29(6)
2.4.1 Predicates for a Trial 29(1)
2.4.2 Trials Can Provide Confirmatory Evidence 29(1)
2.4.3 Clinical Trials Are Reliable Albeit Unwieldy and Messy 30(1)
2.4.4 Trials Are Difficult to Apply in Some Circumstances 31(1)
2.4.5 Randomized Studies Can Be Initiated Early 32(1)
2.4.6 What Can I Learn from n = 20? 33(2)
2.5 Nonexperimental Designs 35(6)
2.5.1 Other Methods Are Valid for Making Some Clinical Inferences 35(3)
2.5.2 Some Specific Nonexperimental Designs 38(2)
2.5.3 Causal Relationships 40(1)
2.5.4 Will Genetic Determinism Replace Design? 41(1)
2.6 Summary 41(1)
2.7 Questions for Discussion 41(2)
3 Why Clinical Trials Are Ethical 43(44)
3.1 Introduction 43(4)
3.1.1 Science and Ethics Share Objectives 44(2)
3.1.2 Equipoise and Uncertainty 46(1)
3.2 Duality 47(10)
3.2.1 Clinical Trials Sharpen, But Do Not Create, Duality 47(1)
3.2.2 A Gene Therapy Tragedy Illustrates Duality 48(1)
3.2.3 Research and Practice Are Convergent 48(4)
3.2.4 Hippocratic Tradition Does Not Proscribe Clinical Trials 52(2)
3.2.5 Physicians Always Have Multiple Roles 54(3)
3.3 Historically Derived Principles of Ethics 57(8)
3.3.1 Nuremberg Contributed an Awareness of the Worst Problems 57(1)
3.3.2 High-Profile Mistakes Were Made in the United States 58(1)
3.3.3 The Helsinki Declaration Was Widely Adopted 58(3)
3.3.4 Other International Guidelines Have Been Proposed 61(1)
3.3.5 Institutional Review Boards Provide Ethics Oversight 62(1)
3.3.6 Ethics Principles Relevant to Clinical Trials 63(2)
3.4 Contemporary Foundational Principles 65(7)
3.4.1 Collaborative Partnership 66(1)
3.4.2 Scientific Value 66(1)
3.4.3 Scientific Validity 66(1)
3.4.4 Fair Subject Selection 67(1)
3.4.5 Favorable Risk-Benefit 67(1)
3.4.6 Independent Review 68(1)
3.4.7 Informed Consent 68(3)
3.4.8 Respect for Subjects 71(1)
3.5 Methodologic Reflections 72(7)
3.5.1 Practice Based on Unproven Treatments Is Not Ethical 72(2)
3.5.2 Ethics Considerations Are Important Determinants of Design 74(1)
3.5.3 Specific Methods Have Justification 75(4)
3.6 Professional Conduct 79(6)
3.6.1 Advocacy 79(2)
3.6.2 Physician to Physician Communication Is Not Research 81(1)
3.6.3 Investigator Responsibilities 82(1)
3.6.4 Professional Ethics 83(2)
3.7 Summary 85(1)
3.8 Questions for Discussion 86(1)
4 Contexts for Clinical Trials 87(50)
4.1 Introduction 87(4)
4.1.1 Clinical Trial Registries 88(2)
4.1.2 Public Perception Versus Science 90(1)
4.2 Drugs 91(4)
4.2.1 Are Drugs Special? 92(1)
4.2.2 Why Trials Are Used Extensively for Drugs 93(2)
4.3 Devices 95(4)
4.3.1 Use of Trials for Medical Devices 95(2)
4.3.2 Are Devices Different from Drugs? 97(1)
4.3.3 Case Study 98(1)
4.4 Prevention 99(7)
4.4.1 The Prevention versus Therapy Dichotomy Is Overworked 100(1)
4.4.2 Vaccines and Biologicals 101(1)
4.4.3 Ebola 2014 and Beyond 102(1)
4.4.4 A Perspective on Risk-Benefit 103(2)
4.4.5 Methodology and Framework for Prevention Trials 105(1)
4.5 Complementary and Alternative Medicine 106(10)
4.5.1 Science Is the Study of Natural Phenomena 108(1)
4.5.2 Ignorance Is Important 109(1)
4.5.3 The Essential Paradox of CAM and Clinical Trials 110(1)
4.5.4 Why Trials Have Not Been Used Extensively in CAM 111(2)
4.5.5 Some Principles for Rigorous Evaluation 113(2)
4.5.6 Historic Examples 115(1)
4.6 Surgery and Skill-Dependent Therapies 116(14)
4.6.1 Why Trials Have Been Used Less Extensively in Surgery 118(2)
4.6.2 Reasons Why Some Surgical Therapies Require Less Rigorous Study Designs 120(1)
4.6.3 Sources of Variation 121(1)
4.6.4 Difficulties of Inference 121(1)
4.6.5 Control of Observer Bias Is Possible 122(2)
4.6.6 Illustrations from an Emphysema Surgery Trial 124(6)
4.7 A Brief View of Some Other Contexts 130(5)
4.7.1 Screening Trials 130(4)
4.7.2 Diagnostic Trials 134(1)
4.7.3 Radiation Therapy 134(1)
4.8 Summary 135(1)
4.9 Questions for Discussion 136(1)
5 Measurement 137(35)
5.1 Introduction 137(3)
5.1.1 Types of Uncertainty 138(2)
5.2 Objectives 140(3)
5.2.1 Estimation Is the Most Common Objective 141(1)
5.2.2 Selection Can Also Be an Objective 141(1)
5.2.3 Objectives Require Various Scales of Measurement 142(1)
5.3 Measurement Design 143(19)
5.3.1 Mixed Outcomes and Predictors 143(1)
5.3.2 Criteria for Evaluating Outcomes 144(1)
5.3.3 Prefer Hard or Objective Outcomes 145(1)
5.3.4 Outcomes Can Be Quantitative or Qualitative 146(1)
5.3.5 Measures Are Useful and Efficient Outcomes 146(1)
5.3.6 Some Outcomes Are Summarized as Counts 147(1)
5.3.7 Ordered Categories Are Commonly Used for Severity or Toxicity 147(1)
5.3.8 Unordered Categories Are Sometimes Used 148(1)
5.3.9 Dichotomies Are Simple Summaries 148(1)
5.3.10 Measures of Risk 149(4)
5.3.11 Primary and Others 153(1)
5.3.12 Composites 154(1)
5.3.13 Event Times and Censoring 155(5)
5.3.14 Longitudinal Measures 160(1)
5.3.15 Central Review 161(1)
5.3.16 Patient Reported Outcomes 161(1)
5.4 Surrogate Outcomes 162(8)
5.4.1 Surrogate Outcomes Are Disease-Specific 164(3)
5.4.2 Surrogate Outcomes Can Make Trials More Efficient 167(1)
5.4.3 Surrogate Outcomes Have Significant Limitations 168(2)
5.5 Summary 170(1)
5.6 Questions for Discussion 171(1)
6 Random Error and Bias 172(24)
6.1 Introduction 172(9)
6.1.1 The Effects of Random and Systematic Errors Are Distinct 173(1)
6.1.2 Hypothesis Tests versus Significance Tests 174(1)
6.1.3 Hypothesis Tests Are Subject to Two Types of Random Error 175(1)
6.1.4 Type I Errors Are Relatively Easy to Control 176(1)
6.1.5 The Properties of Confidence Intervals Are Similar to Hypothesis Tests 176(1)
6.1.6 Using a One- or Two-Sided Hypothesis Test Is Not the Right Question 177(1)
6.1.7 P-Values Quantify the Type I Error 178(1)
6.1.8 Type II Errors Depend on the Clinical Difference of Interest 178(2)
6.1.9 Post Hoc Power Calculations Are Useless 180(1)
6.2 Clinical Bias 181(7)
6.2.1 Relative Size of Random Error and Bias Is Important 182(1)
6.2.2 Bias Arises from Numerous Sources 182(3)
6.2.3 Controlling Structural Bias Is Conceptually Simple 185(3)
6.3 Statistical Bias 188(6)
6.3.1 Selection Bias 188(4)
6.3.2 Some Statistical Bias Can Be Corrected 192(1)
6.3.3 Unbiasedness Is Not the Only Desirable Attribute of an Estimator 192(2)
6.4 Summary 194(1)
6.5 Questions for Discussion 194(2)
7 Statistical Perspectives 196(21)
7.1 Introduction 196(1)
7.2 Differences in Statistical Perspectives 197(5)
7.2.1 Models and Parameters 197(1)
7.2.2 Philosophy of Inference Divides Statisticians 198(1)
7.2.3 Resolution 199(1)
7.2.4 Points of Agreement 199(3)
7.3 Frequentist 202(2)
7.3.1 Binomial Case Study 203(1)
7.3.2 Other Issues 204(1)
7.4 Bayesian 204(6)
7.4.1 Choice of a Prior Distribution Is a Source of Contention 205(1)
7.4.2 Binomial Case Study 206(3)
7.4.3 Bayesian Inference Is Different 209(1)
7.5 Likelihood 210(2)
7.5.1 Binomial Case Study 211(1)
7.5.2 Likelihood-Based Design 211(1)
7.6 Statistics Issues 212(3)
7.6.1 Perspective 212(1)
7.6.2 Statistical Procedures Are Not Standardized 213(1)
7.6.3 Practical Controversies Related to Statistics Exist 214(1)
7.7 Summary 215(1)
7.8 Questions for Discussion 216(1)
8 Experiment Design in Clinical Trials 217(37)
8.1 Introduction 217(1)
8.2 Trials As Simple Experiment Designs 218(5)
8.2.1 Design Space Is Chaotic 219(1)
8.2.2 Design Is Critical for Inference 220(1)
8.2.3 The Question Drives the Design 220(1)
8.2.4 Design Depends on the Observation Model As Well As the Biological Question 221(1)
8.2.5 Comparing Designs 222(1)
8.3 Goals of Experiment Design 223(2)
8.3.1 Control of Random Error and Bias Is the Goal 223(1)
8.3.2 Conceptual Simplicity Is Also a Goal 223(1)
8.3.3 Encapsulation of Subjectivity 224(1)
8.3.4 Leech Case Study 225(1)
8.4 Design Concepts 225(5)
8.4.1 The Foundations of Design Are Observation and Theory 226(1)
8.4.2 A Lesson from the Women's Health Initiative 227(2)
8.4.3 Experiments Use Three Components of Design 229(1)
8.5 Design Features 230(7)
8.5.1 Enrichment 231(1)
8.5.2 Replication 232(1)
8.5.3 Experimental and Observational Units 232(1)
8.5.4 Treatments and Factors 233(1)
8.5.5 Nesting 233(1)
8.5.6 Randomization 234(1)
8.5.7 Blocking 234(1)
8.5.8 Stratification 235(1)
8.5.9 Masking 236(1)
8.6 Special Design Issues 237(7)
8.6.1 Placebos 237(3)
8.6.2 Equivalence and Noninferiority 240(1)
8.6.3 Randomized Discontinuation 241(1)
8.6.4 Hybrid Designs May Be Needed for Resolving Special Questions 242(1)
8.6.5 Clinical Trials Cannot Meet Certain Objectives 242(2)
8.7 Importance of the Protocol Document 244(8)
8.7.1 Protocols Have Many Functions 244(1)
8.7.2 Deviations from Protocol Specifications Are Common 245(1)
8.7.3 Protocols Are Structured, Logical, and Complete 246(6)
8.8 Summary 252(1)
8.9 Questions for Discussion 253(1)
9 The Trial Cohort 254(23)
9.1 Introduction 254(1)
9.2 Cohort Definition and Selection 255(9)
9.2.1 Eligibility and Exclusions 255(2)
9.2.2 Active Sampling and Enrichment 257(1)
9.2.3 Participation May Select Subjects with Better Prognosis 258(4)
9.2.4 Quantitative Selection Criteria Versus False Precision 262(1)
9.2.5 Comparative Trials Are Not Sensitive to Selection 263(1)
9.3 Modeling Accrual 264(3)
9.3.1 Using a Run-In Period 264(1)
9.3.2 Estimate Accrual Quantitatively 265(2)
9.4 Inclusiveness, Representation, and Interactions 267(8)
9.4.1 Inclusiveness Is a Worthy Goal 267(1)
9.4.2 Barriers Can Hinder Trial Participation 268(1)
9.4.3 Efficacy versus Effectiveness Trials 269(1)
9.4.4 Representation: Politics Blunders into Science 270(5)
9.5 Summary 275(1)
9.6 Questions for Discussion 275(2)
10 Development Paradigms 277(25)
10.1 Introduction 277(4)
10.1.1 Stages of Development 278(2)
10.1.2 Trial Design versus Development Design 280(1)
10.1.3 Companion Diagnostics in Cancer 281(1)
10.2 Pipeline Principles and Problems 281(5)
10.2.1 The Paradigm Is Not Linear 282(1)
10.2.2 Staging Allows Efficiency 282(1)
10.2.3 The Pipeline Impacts Study Design 283(1)
10.2.4 Specificity and Pressures Shape the Pipeline 283(1)
10.2.5 Problems with Trials 284(2)
10.2.6 Problems in the Pipeline 286(1)
10.3 A Simple Quantitative Pipeline 286(6)
10.3.1 Pipeline Operating Characteristics Can Be Derived 286(2)
10.3.2 Implications May Be Counterintuitive 288(1)
10.3.3 Optimization Yields Insights 288(3)
10.3.4 Overall Implications for the Pipeline 291(1)
10.4 Late Failures 292(8)
10.4.1 Generic Mistakes in Evaluating Evidence 293(1)
10.4.2 "Safety" Begets Efficacy Testing 293(1)
10.4.3 Pressure to Advance Ideas Is Unprecedented 294(1)
10.4.4 Scientists Believe Weird Things 294(1)
10.4.5 Confirmation Bias 295(1)
10.4.6 Many Biological Endpoints Are Neither Predictive nor Prognostic 296(1)
10.4.7 Disbelief Is Easier to Suspend Than Belief 296(1)
10.4.8 Publication Bias 297(1)
10.4.9 Intellectual Conflicts of Interest 297(1)
10.4.10 Many Preclinical Models Are Invalid 298(1)
10.4.11 Variation Despite Genomic Determinism 299(1)
10.4.12 Weak Evidence Is Likely to Mislead 300(1)
10.5 Summary 300(1)
10.6 Questions for Discussion 301(1)
11 Translational Clinical Trials 302(27)
11.1 Introduction 302(6)
11.1.1 Therapeutic Intent or Not? 303(1)
11.1.2 Mechanistic Trials 304(1)
11.1.3 Marker Threshold Designs Are Strongly Biased 305(3)
11.2 Inferential Paradigms 308(4)
11.2.1 Biologic Paradigm 308(2)
11.2.2 Clinical Paradigm 310(1)
11.2.3 Surrogate Paradigm 311(1)
11.3 Evidence and Theory 312(1)
11.3.1 Biological Models Are a Key to Translational Trials 313(1)
11.4 Translational Trials Defined 313(4)
11.4.1 Translational Paradigm 313(2)
11.4.2 Character and Definition 315(1)
11.4.3 Small or "Pilot" Does Not Mean Translational 316(1)
11.4.4 Hypothetical Example 316(1)
11.4.5 Nesting Translational Studies 317(1)
11.5 Information From Translational Trials 317(11)
11.5.1 Surprise Can Be Defined Mathematically 318(1)
11.5.2 Parameter Uncertainty Versus Outcome Uncertainty 318(1)
11.5.3 Expected Surprise and Entropy 319(2)
11.5.4 Information/Entropy Calculated From Small Samples Is Biased 321(1)
11.5.5 Variance of Information/Entropy 322(2)
11.5.6 Sample Size for Translational Trials 324(3)
11.5.7 Validity 327(1)
11.6 Summary 328(1)
11.7 Questions for Discussion 328(1)
12 Early Development and Dose-Finding 329(41)
12.1 Introduction 329(1)
12.2 Basic Concepts 330(3)
12.2.1 Therapeutic Intent 330(1)
12.2.2 Feasibility 331(1)
12.2.3 Dose versus Efficacy 332(1)
12.3 Essential Concepts for Dose versus Risk 333(5)
12.3.1 What Does the Terminology Mean? 333(1)
12.3.2 Distinguish Dose-Risk From Dose-Efficacy 334(1)
12.3.3 Dose Optimality Is a Design Definition 335(1)
12.3.4 Unavoidable Subjectivity 335(1)
12.3.5 Sample Size Is an Outcome of Dose-Finding Studies 336(1)
12.3.6 Idealized Dose-Finding Design 336(2)
12.4 Dose-Ranging 338(6)
12.4.1 Some Historical Designs 338(1)
12.4.2 Typical Dose-Ranging Design 339(1)
12.4.3 Operating Characteristics Can Be Calculated 340(3)
12.4.4 Modifications, Strengths, and Weaknesses 343(1)
12.5 Dose-Finding Is Model Based 344(10)
12.5.1 Mathematical Models Facilitate Inferences 345(1)
12.5.2 Continual Reassessment Method 345(4)
12.5.3 Pharmacokinetic Measurements Might Be Used to Improve CRM Dose Escalations 349(1)
12.5.4 The CRM Is an Attractive Design to Criticize 350(1)
12.5.5 CRM Clinical Examples 350(1)
12.5.6 Dose Distributions 351(1)
12.5.7 Estimation with Overdose Control (EWOC) 351(2)
12.5.8 Randomization in Early Development? 353(1)
12.5.9 Phase I Data Have Other Uses 353(1)
12.6 General Dose-Finding Issues 354(12)
12.6.1 The General Dose-Finding Problem Is Unsolved 354(2)
12.6.2 More than One Drug 356(5)
12.6.3 More than One Outcome 361(2)
12.6.4 Envelope Simulation 363(3)
12.7 Summary 366(2)
12.8 Questions for Discussion 368(2)
13 Middle Development 370(27)
13.1 Introduction 370(2)
13.1.1 Estimate Treatment Effects 371(1)
13.2 Characteristics of Middle Development 372(3)
13.2.1 Constraints 373(1)
13.2.2 Outcomes 374(1)
13.2.3 Focus 375(1)
13.3 Design Issues 375(4)
13.3.1 Choices in Middle Development 375(1)
13.3.2 When to Skip Middle Development 376(1)
13.3.3 Randomization 377(1)
13.3.4 Other Design Issues 378(1)
13.4 Middle Development Distills True Positives 379(2)
13.5 Futility and Nonsuperiority Designs 381(4)
13.5.1 Asymmetry in Error Control 382(1)
13.5.2 Should We Control False Positives or False Negatives? 383(1)
13.5.3 Futility Design Example 384(1)
13.5.4 A Conventional Approach to Futility 385(1)
13.6 Dose-Efficacy Questions 385(1)
13.7 Randomized Comparisons 386(6)
13.7.1 When to Perform an Error-Prone Comparative Trial 387(1)
13.7.2 Examples 388(1)
13.7.3 Randomized Selection 389(3)
13.8 Cohort Mixtures 392(3)
13.9 Summary 395(1)
13.10 Questions for Discussion 396(1)
14 Comparative Trials 397(16)
14.1 Introduction 397(1)
14.2 Elements of Reliability 398(4)
14.2.1 Key Features 399(1)
14.2.2 Flexibilities 400(1)
14.2.3 Other Design Issues 400(2)
14.3 Biomarker-Based Comparative Designs 402(6)
14.3.1 Biomarkers Are Diverse 402(2)
14.3.2 Enrichment 404(1)
14.3.3 Biomarker-Stratified 404(1)
14.3.4 Biomarker-Strategy 405(1)
14.3.5 Multiple-Biomarker Signal-Finding 406(1)
14.3.6 Prospective-Retrospective Evaluation of a Biomarker 407(1)
14.3.7 Master Protocols 407(1)
14.4 Some Special Comparative Designs 408(3)
14.4.1 Randomized Discontinuation 408(1)
14.4.2 Delayed Start 409(1)
14.4.3 Cluster Randomization 410(1)
14.4.4 Noninferiority 410(1)
14.4.5 Multiple Agents versus Control 410(1)
14.5 Summary 411(1)
14.6 Questions for Discussion 412(1)
15 Adaptive Design Features 413(17)
15.1 Introduction 413(5)
15.1.1 Advantages and Disadvantages of AD 414(2)
15.1.2 Design Adaptations Are Tools, Not a Class 416(1)
15.1.3 Perspective on Bayesian Methods 417(1)
15.1.4 The Pipeline Is the Main Adaptive Tool 417(1)
15.2 Some Familiar Adaptations 418(5)
15.2.1 Dose-Finding Is Adaptive 418(1)
15.2.2 Adaptive Randomization 418(4)
15.2.3 Staging Is Adaptive 422(1)
15.2.4 Dropping a Treatment Arm or Subset 423(1)
15.3 Biomarker Adaptive Trials 423(2)
15.4 Re-Designs 425(2)
15.4.1 Sample Size Re-Estimation Requires Caution 425(2)
15.5 Seamless Designs 427(1)
15.6 Barriers to the Use of AD 428(1)
15.7 Adaptive Design Case Study 428(1)
15.8 Summary 429(1)
15.9 Questions for Discussion 429(1)
16 Sample Size and Power 430(62)
16.1 Introduction 430(1)
16.2 Principles 431(5)
16.2.1 What Is Precision? 432(1)
16.2.2 What Is Power? 433(1)
16.2.3 What Is Evidence? 434(1)
16.2.4 Sample Size and Power Calculations Are Approximations 435(1)
16.2.5 The Relationship between Power/Precision and Sample Size Is Quadratic 435(1)
16.3 Early Developmental Trials 436(2)
16.3.1 Translational Trials 436(1)
16.3.2 Dose-Finding Trials 437(1)
16.4 Simple Estimation Designs 438(13)
16.4.1 Confidence Intervals for a Mean Provide a Sample Size Approach 438(2)
16.4.2 Estimating Proportions Accurately 440(1)
16.4.3 Exact Binomial Confidence Limits Are Helpful 441(3)
16.4.4 Precision Helps Detect Improvement 444(2)
16.4.5 Bayesian Binomial Confidence Intervals 446(1)
16.4.6 A Bayesian Approach Can Use Prior Information 447(3)
16.4.7 Likelihood-Based Approach for Proportions 450(1)
16.5 Event Rates 451(4)
16.5.1 Confidence Intervals for Event Rates Can Determine Sample Size 451(3)
16.5.2 Likelihood-Based Approach for Event Rates 454(1)
16.6 Staged Studies 455(2)
16.6.1 Ineffective or Unsafe Treatments Should Be Discarded Early 455(1)
16.6.2 Two-Stage Designs Increase Efficiency 456(1)
16.7 Comparative Trials 457(21)
16.7.1 How to Choose Type I and II Error Rates? 459(1)
16.7.2 Comparisons Using the t-Test Are a Good Learning Example 459(3)
16.7.3 Likelihood-Based Approach 462(1)
16.7.4 Dichotomous Responses Are More Complex 463(1)
16.7.5 Hazard Comparisons Yield Similar Equations 464(3)
16.7.6 Parametric and Nonparametric Equations Are Connected 467(1)
16.7.7 Accommodating Unbalanced Treatment Assignments 467(2)
16.7.8 A Simple Accrual Model Can Also Be Incorporated 469(2)
16.7.9 Stratification 471(1)
16.7.10 Noninferiority 472(6)
16.8 Expanded Safety Trials 478(3)
16.8.1 Model Rare Events with the Poisson Distribution 479(1)
16.8.2 Likelihood Approach for Poisson Rates 479(2)
16.9 Other Considerations 481(8)
16.9.1 Cluster Randomization Requires Increased Sample Size 481(1)
16.9.2 Simple Cost Optimization 482(1)
16.9.3 Increase the Sample Size for Nonadherence 482(3)
16.9.4 Simulated Lifetables Can Be a Simple Design Tool 485(1)
16.9.5 Sample Size for Prognostic Factor Studies 486(1)
16.9.6 Computer Programs Simplify Calculations 487(1)
16.9.7 Simulation Is a Powerful and Flexible Design Alternative 487(1)
16.9.8 Power Curves Are Sigmoid Shaped 488(1)
16.10 Summary 489(1)
16.11 Questions for Discussion 490(2)
17 Treatment Allocation 492(30)
17.1 Introduction 492(2)
17.1.1 Balance and Bias Are Independent 493(1)
17.2 Randomization 494(6)
17.2.1 Heuristic Proof of the Value of Randomization 495(2)
17.2.2 Control the Influence of Unknown Factors 497(1)
17.2.3 Haphazard Assignments Are Not Random 498(1)
17.2.4 Simple Randomization Can Yield Imbalances 499(1)
17.3 Constrained Randomization 500(4)
17.3.1 Blocking Improves Balance 500(1)
17.3.2 Blocking and Stratifying Balances Prognostic Factors 501(2)
17.3.3 Other Considerations Regarding Blocking 503(1)
17.4 Adaptive Allocation 504(3)
17.4.1 Urn Designs Also Improve Balance 504(1)
17.4.2 Minimization Yields Tight Balance 504(1)
17.4.3 Play the Winner 505(2)
17.5 Other Issues Regarding Randomization 507(7)
17.5.1 Administration of the Randomization 507(1)
17.5.2 Computers Generate Pseudorandom Numbers 508(1)
17.5.3 Randomized Treatment Assignment Justifies Type I Errors 509(5)
17.6 Unequal Treatment Allocation 514(5)
17.6.1 Subsets May Be of Interest 514(1)
17.6.2 Treatments May Differ Greatly in Cost 515(1)
17.6.3 Variances May Be Different 515(1)
17.6.4 Multiarm Trials May Require Asymmetric Allocation 516(1)
17.6.5 Generalization 517(1)
17.6.6 Failed Randomization? 518(1)
17.7 Randomization Before Consent 519(1)
17.8 Summary 520(1)
17.9 Questions for Discussion 520(2)
18 Treatment Effects Monitoring 522(51)
18.1 Introduction 522(5)
18.1.1 Motives for Monitoring 523(1)
18.1.2 Components of Responsible Monitoring 524(1)
18.1.3 Trials Can Be Stopped for a Variety of Reasons 524(2)
18.1.4 There Is Tension in the Decision to Stop 526(1)
18.2 Administrative Issues in Trial Monitoring 527(10)
18.2.1 Monitoring of Single-Center Studies Relies on Periodic Investigator Reporting 527(1)
18.2.2 Composition and Organization of the TEMC 528(7)
18.2.3 Complete Objectivity Is Not Ethical 535(2)
18.2.4 Independent Experts in Monitoring 537(1)
18.3 Organizational Issues Related to Monitoring 537(8)
18.3.1 Initial TEMC Meeting 538(1)
18.3.2 The TEMC Assesses Baseline Comparability 538(1)
18.3.3 The TEMC Reviews Accrual and Expected Time to Study Completion 539(1)
18.3.4 Timeliness of Data and Reporting Lags 539(1)
18.3.5 Data Quality Is a Major Focus of the TEMC 540(1)
18.3.6 The TEMC Reviews Safety and Toxicity Data 541(1)
18.3.7 Efficacy Differences Are Assessed by the TEMC 541(1)
18.3.8 The TEMC Should Address Some Practical Questions Specifically 541(3)
18.3.9 The TEMC Mechanism Has Potential Weaknesses 544(1)
18.4 Statistical Methods for Monitoring 545(25)
18.4.1 There Are Several Approaches to Evaluating Incomplete Evidence 545(2)
18.4.2 Monitoring Developmental Trials for Risk 547(4)
18.4.3 Likelihood-Based Methods 551(6)
18.4.4 Bayesian Methods 557(2)
18.4.5 Decision-Theoretic Methods 559(1)
18.4.6 Frequentist Methods 560(6)
18.4.7 Other Monitoring Tools 566(4)
18.4.8 Some Software 570(1)
18.5 Summary 570(2)
18.6 Questions for Discussion 572(1)
19 Counting Subjects and Events 573(17)
19.1 Introduction 573(1)
19.2 Imperfection and Validity 574(1)
19.3 Treatment Nonadherence 575(5)
19.3.1 Intention to Treat Is a Policy of Inclusion 575(1)
19.3.2 Coronary Drug Project Results Illustrate the Pitfalls of Exclusions Based on Nonadherence 576(1)
19.3.3 Statistical Studies Support the ITT Approach 577(1)
19.3.4 Trials Are Tests of Treatment Policy 577(1)
19.3.5 ITT Analyses Cannot Always Be Applied 578(1)
19.3.6 Trial Inferences Depend on the Experiment Design 579(1)
19.4 Protocol Nonadherence 580(3)
19.4.1 Eligibility 580(1)
19.4.2 Treatment 581(1)
19.4.3 Defects in Retrospect 582(1)
19.5 Data Imperfections 583(5)
19.5.1 Evaluability Criteria Are a Methodologic Error 583(1)
19.5.2 Statistical Methods Can Cope with Some Types of Missing Data 584(4)
19.6 Summary 588(1)
19.7 Questions for Discussion 589(1)
20 Estimating Clinical Effects 590(54)
20.1 Introduction 590(4)
20.1.1 Invisibility Works Against Validity 591(1)
20.1.2 Structure Aids Internal and External Validity 591(1)
20.1.3 Estimates of Risk Are Natural and Useful 592(2)
20.2 Dose-Finding and Pharmacokinetic Trials 594(5)
20.2.1 Pharmacokinetic Models Are Essential for Analyzing DF Trials 594(1)
20.2.2 A Two-Compartment Model Is Simple but Realistic 595(3)
20.2.3 PK Models Are Used By "Model Fitting" 598(1)
20.3 Middle Development Studies 599(7)
20.3.1 Mesothelioma Clinical Trial Example 599(1)
20.3.2 Summarize Risk for Dichotomous Factors 600(1)
20.3.3 Nonparametric Estimates of Survival Are Robust 601(2)
20.3.4 Parametric (Exponential) Summaries of Survival Are Efficient 603(2)
20.3.5 Percent Change and Waterfall Plots 605(1)
20.4 Randomized Comparative Trials 606(10)
20.4.1 Examples of Comparative Trials Used in This Section 607(1)
20.4.2 Continuous Measures Estimate Treatment Differences 608(1)
20.4.3 Baseline Measurements Can Increase Precision 609(1)
20.4.4 Comparing Counts 610(2)
20.4.5 Nonparametric Survival Comparisons 612(2)
20.4.6 Risk (Hazard) Ratios and Confidence Intervals Are Clinically Useful Data Summaries 614(1)
20.4.7 Statistical Models Are Necessary Tools 615(1)
20.5 Problems With P-Values 616(4)
20.5.1 P-Values Do Not Represent Treatment Effects 618(1)
20.5.2 P-Values Do Not Imply Reproducibility 618(1)
20.5.3 P-Values Do Not Measure Evidence 619(1)
20.6 Strength of Evidence Through Support Intervals 620(2)
20.6.1 Support Intervals Are Based on the Likelihood Function 620(1)
20.6.2 Support Intervals Can Be Used with Any Outcome 621(1)
20.7 Special Methods of Analysis 622(6)
20.7.1 The Bootstrap Is Based on Resampling 623(1)
20.7.2 Some Clinical Questions Require Other Special Methods of Analysis 623(5)
20.8 Exploratory Analyses 628(11)
20.8.1 Clinical Trial Data Lend Themselves to Exploratory Analyses 628(1)
20.8.2 Multiple Tests Multiply Type I Errors 629(1)
20.8.3 Kinds of Multiplicity 630(1)
20.8.4 Inevitable Risks from Subgroups 630(2)
20.8.5 Tale of a Subset Analysis Gone Wrong 632(3)
20.8.6 Perspective on Subgroup Analyses 635(1)
20.8.7 Effects the Trial Was Not Designed to Detect 636(1)
20.8.8 Safety Signals 637(1)
20.8.9 Subsets 637(1)
20.8.10 Interactions 638(1)
20.9 Summary 639(1)
20.10 Questions for Discussion 640(4)
21 Prognostic Factor Analyses 644(27)
21.1 Introduction 644(3)
21.1.1 Studying Prognostic Factors Is Broadly Useful 645(1)
21.1.2 Prognostic Factors Can Be Constant or Time-Varying 646(1)
21.2 Model-Based Methods 647(14)
21.2.1 Models Combine Theory and Data 647(1)
21.2.2 Scale and Coding May Be Important 648(1)
21.2.3 Use Flexible Covariate Models 648(2)
21.2.4 Building Parsimonious Models Is the Next Step 650(5)
21.2.5 Incompletely Specified Models May Yield Biased Estimates 655(1)
21.2.6 Study Second-Order Effects (Interactions) 656(1)
21.2.7 PFAs Can Help Describe Risk Groups 656(4)
21.2.8 Power and Sample Size for PFAs 660(1)
21.3 Adjusted Analyses of Comparative Trials 661(5)
21.3.1 What Should We Adjust For? 662(1)
21.3.2 What Can Happen? 663(1)
21.3.3 Brain Tumor Case Study 664(2)
21.4 PFAs Without Models 666(3)
21.4.1 Recursive Partitioning Uses Dichotomies 666(1)
21.4.2 Neural Networks Are Used for Pattern Recognition 667(2)
21.5 Summary 669(1)
21.6 Questions for Discussion 669(2)
22 Factorial Designs 671(13)
22.1 Introduction 671(1)
22.2 Characteristics of Factorial Designs 672(3)
22.2.1 Interactions or Efficiency, But Not Both Simultaneously 672(1)
22.2.2 Factorial Designs Are Defined by Their Structure 672(2)
22.2.3 Factorial Designs Can Be Made Efficient 674(1)
22.3 Treatment Interactions 675(5)
22.3.1 Factorial Designs Are the Only Way to Study Interactions 675(2)
22.3.2 Interactions Depend on the Scale of Measurement 677(1)
22.3.3 The Interpretation of Main Effects Depends on Interactions 677(1)
22.3.4 Analyses Can Employ Linear Models 678(2)
22.4 Examples of Factorial Designs 680(2)
22.5 Partial, Fractional, and Incomplete Factorials 682(1)
22.5.1 Use Partial Factorial Designs When Interactions Are Absent 682(1)
22.5.2 Incomplete Designs Present Special Problems 682(1)
22.6 Summary 683(1)
22.7 Questions for Discussion 683(1)
23 Crossover Designs 684(14)
23.1 Introduction 684(2)
23.1.1 Other Ways of Giving Multiple Treatments Are Not Crossovers 685(1)
23.1.2 Treatment Periods May Be Randomly Assigned 686(1)
23.2 Advantages and Disadvantages 686(5)
23.2.1 Crossover Designs Can Increase Precision 687(1)
23.2.2 A Crossover Design Might Improve Recruitment 687(1)
23.2.3 Carryover Effects Are a Potential Problem 688(1)
23.2.4 Dropouts Have Strong Effects 689(1)
23.2.5 Analysis Is More Complex Than for a Parallel-Group Design 689(1)
23.2.6 Prerequisites Are Needed to Apply Crossover Designs 689(1)
23.2.7 Other Uses for the Design 690(1)
23.3 Analysis 691(5)
23.3.1 Simple Approaches 691(1)
23.3.2 Analysis Can Be Based on a Cell Means Model 692(4)
23.3.3 Other Issues in Analysis 696(1)
23.4 Classic Case Study 696(1)
23.5 Summary 696(1)
23.6 Questions for Discussion 697(1)
24 Meta-Analyses 698(11)
24.1 Introduction 698(2)
24.1.1 Meta-Analyses Formalize Synthesis and Increase Precision 699(1)
24.2 A Sketch of Meta-Analysis Methods 700(5)
24.2.1 Meta-Analysis Necessitates Prerequisites 700(1)
24.2.2 Many Studies Are Potentially Relevant 701(1)
24.2.3 Select Studies 702(1)
24.2.4 Plan the Statistical Analysis 703(1)
24.2.5 Summarize the Data Using Observed and Expected 703(2)
24.3 Other Issues 705(2)
24.3.1 Cumulative Meta-Analyses 705(1)
24.3.2 Meta-Analyses Have Practical and Theoretical Limitations 706(1)
24.3.3 Meta-Analysis Has Taught Useful Lessons 707(1)
24.4 Summary 707(1)
24.5 Questions for Discussion 708(1)
25 Reporting and Authorship 709(25)
25.1 Introduction 709(1)
25.2 General Issues in Reporting 710(5)
25.2.1 Uniformity Improves Comprehension 711(1)
25.2.2 Quality of the Literature 712(1)
25.2.3 Peer Review Is the Only Game in Town 712(1)
25.2.4 Publication Bias Can Distort Impressions Based on the Literature 713(2)
25.3 Clinical Trial Reports 715(11)
25.3.1 General Considerations 716(5)
25.3.2 Employ a Complete Outline for Comparative Trial Reporting 721(5)
25.4 Authorship 726(5)
25.4.1 Inclusion and Ordering 727(1)
25.4.2 Responsibility of Authorship 727(1)
25.4.3 Authorship Models 728(2)
25.4.4 Some Other Practicalities 730(1)
25.5 Other Issues in Disseminating Results 731(1)
25.5.1 Open Access 731(1)
25.5.2 Clinical Alerts 731(1)
25.5.3 Retractions 732(1)
25.6 Summary 732(1)
25.7 Questions for Discussion 733(1)
26 Misconduct and Fraud in Clinical Research 734(27)
26.1 Introduction 734(7)
26.1.1 Integrity and Accountability Are Critically Important 736(2)
26.1.2 Fraud and Misconduct Are Difficult to Define 738(3)
26.2 Research Practices 741(2)
26.2.1 Misconduct May Be Increasing in Frequency 741(1)
26.2.2 Causes of Misconduct 742(1)
26.3 Approach to Allegations of Misconduct 743(4)
26.3.1 Institutions 744(2)
26.3.2 Problem Areas 746(1)
26.4 Characteristics of Some Misconduct Cases 747(7)
26.4.1 Darsee Case 747(2)
26.4.2 Poisson (NSABP) Case 749(3)
26.4.3 Two Recent Cases from Germany 752(1)
26.4.4 Fiddes Case 753(1)
26.4.5 Potti Case 754(1)
26.5 Lessons 754(3)
26.5.1 Recognizing Fraud or Misconduct 754(2)
26.5.2 Misconduct Cases Yield Other Lessons 756(1)
26.6 Clinical Investigators' Responsibilities 757(2)
26.6.1 General Responsibilities 757(1)
26.6.2 Additional Responsibilities Related to INDs 758(1)
26.6.3 Sponsor Responsibilities 759(1)
26.7 Summary 759(1)
26.8 Questions for Discussion 760(1)
Appendix A Data and Programs 761(3)
A.1 Introduction 761(1)
A.2 Design Programs 761(2)
A.2.1 Power and Sample Size Program 761(2)
A.2.2 Blocked Stratified Randomization 763(1)
A.2.3 Continual Reassessment Method 763(1)
A.2.4 Envelope Simulation 763(1)
A.3 Mathematica Code 763(1)
Appendix B Abbreviations 764(5)
Appendix C Notation and Terminology 769(19)
C.1 Introduction 769(1)
C.2 Notation 769(3)
C.2.1 Greek Letters 770(1)
C.2.2 Roman Letters 771(1)
C.2.3 Other Symbols 772(1)
C.3 Terminology and Concepts 772(16)
Appendix D Nuremberg Code 788(2)
D.1 Permissible Medical Experiments 788(2)
References 790(81)
Index 871