
E-book: Test-Driven Development: An Empirical Evaluation of Agile Practice

  • Format: PDF+DRM
  • Publication date: 05-Dec-2009
  • Publisher: Springer-Verlag Berlin and Heidelberg GmbH & Co. K
  • Language: eng
  • ISBN-13: 9783642042881
  • Price: 55,56 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Agile methods are attracting growing interest in both industry and research. Many organizations are moving from traditional, long-running waterfall projects to more incremental, iterative and agile practices. At the same time, the need to evaluate and obtain evidence for different processes, methods and tools has been emphasized.

Lech Madeyski offers the first in-depth evaluation of agile methods. He presents in detail the results of three experiments, including concrete examples of how to conduct statistical analysis with meta-analysis or the SPSS package, using as evaluation indicators the number of acceptance tests passed (overall and per hour) and design complexity metrics.
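To give a flavor of the kind of meta-analysis the book discusses, the sketch below pools one-sided p-values from several experiments with Stouffer's method. The p-values here are invented for illustration only and are not taken from the book, which works with its own experimental data and also combines effect sizes.

```python
from math import sqrt
from statistics import NormalDist

def stouffer(p_values):
    """Combine one-sided p-values across experiments with Stouffer's
    (unweighted) method: convert each p to a z-score, sum, renormalize."""
    nd = NormalDist()
    z_scores = [nd.inv_cdf(1.0 - p) for p in p_values]
    z_combined = sum(z_scores) / sqrt(len(z_scores))
    # Convert the combined z-score back to a one-sided p-value.
    return 1.0 - nd.cdf(z_combined)

# Hypothetical per-experiment p-values for an effect of test-first
# programming on the percentage of acceptance tests passed.
print(f"combined one-sided p = {stouffer([0.04, 0.20, 0.09]):.4f}")
```

The same computation is available as `scipy.stats.combine_pvalues(..., method="stouffer")`; the version above uses only the standard library.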

The book is appropriate for graduate students, researchers and advanced professionals in software engineering. It proves the real benefits of agile software development, provides readers with in-depth insights into experimental methods in the context of agile development, and discusses various validity threats in empirical studies.




Reviews

"This work is really geared toward researchers, software architects, and engineering and QA professionals who are looking for relevant metrics to measure the feasibility of the agile implementation at their respective organizations." - from the review by Joel R. Singh on www.stickyminds.com

"In summary, the book provides many valuable insights, both to practitioners in terms of the evidence for test-first programming and to researchers in terms of clear illustrations of how new processes, methods and tools can be evaluated using experimentation in software engineering. It is a pleasure to recommend this book to practitioners and researchers interested in agile methods, empirical evaluation, or both." - from the Foreword by Claes Wohlin, Blekinge Institute of Technology, Sweden

"This book makes a big step forward in a scientific approach to software engineering in general, and agile practices in particular. I am a practitioner and this is one of the very few books I saw that are in line with my gut feeling and day-to-day experience with Test-Driven Development and code quality. I believe this book will also help stop some of us from blindly practising agile methods as voodoo rituals and shed some light on the facts behind it." - Wojciech Biela, Agile Evangelist, Development Head; EMPiK.com and ExOrigo

"[...] The author helps the reader appreciate the rationale behind the use of scientific research methods in software engineering. The strength of the book lies in the appreciation of empirical software engineering methods. A few of the book's strengths are: its explanation of the research methods through modeling processes in unified modeling language (UML), the validation of models through hypotheses, the use of quantitative methods available through SPSS software, and the presentation of the examples through an incremental approach. [...] I recommend the book to research scholars who plan to conduct multidisciplinary research in software engineering." - ACM Computing Reviews, Harekrishna Misra, January 2011

Table of Contents

Introduction 1(14)
Test-First Programming 1(3)
Mechanisms Behind Test-First Programming that Motivate Research 2(2)
Research Methodology 4(4)
Empirical Software Engineering 4(1)
Empirical Methods 5(3)
Software Measurement 8(5)
Measurement Levels 8(1)
Software Product Quality 9(3)
Software Development Productivity 12(1)
Research Questions 13(1)
Book Organization 13(1)
Claimed Contributions 14(1)
Related Work in Industrial and Academic Environments 15(10)
Test-First Programming 15(5)
Pair Programming 20(3)
Summary 23(2)
Research Goals, Conceptual Model and Variables Selection 25(14)
Goals Definition 25(1)
Conceptual Model 26(2)
Variables Selection 28(11)
Independent Variable (IV) 28(4)
Dependent Variables (DVs) --- From Goals to Dependent Variables 32(4)
Confounding Variables 36(3)
Experiments Planning, Execution and Analysis Procedure 39(22)
Context Information 39(2)
Hypotheses 41(1)
Measurement Tools 42(2)
Aopmetrics 42(1)
ActivitySensor and SmartSensor Plugins 43(1)
Judy 43(1)
Experiment ACCOUNTING 44(2)
Goals 44(1)
Subjects 44(1)
Experimental Materials 44(1)
Experimental Task 45(1)
Hypotheses and Variables 45(1)
Design of the Experiment 45(1)
Experiment Operation 46(1)
Experiment SUBMISSION 46(4)
Goals 47(1)
Subjects 47(1)
Experimental Materials 47(1)
Experimental Task 48(1)
Hypotheses and Variables 48(1)
Design of the Experiment 48(1)
Experiment Operation 48(2)
Experiment SMELLS & LIBRARY 50(3)
Goals 50(1)
Subjects 50(1)
Experimental Materials 51(1)
Experimental Tasks 51(1)
Hypotheses and Variables 51(1)
Design of the Experiment 52(1)
Experiment Operation 52(1)
Analysis Procedure 53(8)
Descriptive Statistics 53(1)
Assumptions of Parametric Tests 53(1)
Carry-Over Effect 54(1)
Hypotheses Testing 55(1)
Effect Sizes 55(1)
Analysis of Covariance 56(1)
Process Conformance and Selective Analysis 57(3)
Combining Empirical Evidence 60(1)
Effect on the Percentage of Acceptance Tests Passed 61(66)
Analysis of Experiment ACCOUNTING 61(40)
Preliminary Analysis 61(24)
Selective Analysis 85(16)
Analysis of Experiment SUBMISSION 101(15)
Preliminary Analysis 101(9)
Selective Analysis 110(6)
Analysis of Experiment SMELLS & LIBRARY 116(9)
Preliminary Analysis 117(4)
Selective Analysis 121(4)
Instead of Summary 125(2)
Effect on the Number of Acceptance Tests Passed per Hour 127(14)
Analysis of Experiment ACCOUNTING 127(2)
Descriptive Statistics 128(1)
Non-Parametric Analysis 128(1)
Analysis of Experiment SUBMISSION 129(7)
Descriptive Statistics 129(2)
Assumption Testing 131(1)
Non-Parametric Analysis 131(5)
Analysis of Experiment SMELLS & LIBRARY 136(4)
Descriptive Statistics 136(2)
Assumption Testing 138(1)
Non-Parametric Analysis 139(1)
Instead of Summary 140(1)
Effect on Internal Quality Indicators 141(18)
Confounding Effect of Class Size on the Validity of Object-Oriented Metrics 141(1)
Analysis of Experiment ACCOUNTING 142(5)
Descriptive Statistics 142(3)
Assumption Testing 145(1)
Mann-Whitney Tests 145(2)
Analysis of Experiment SUBMISSION 147(5)
Descriptive Statistics 147(3)
Assumption Testing 150(1)
Independent t-Test 150(2)
Analysis of Experiment SMELLS & LIBRARY 152(6)
Descriptive Statistics 152(1)
Assumption Testing 153(2)
Dependent t-Test 155(3)
Instead of Summary 158(1)
Effects on Unit Tests - Preliminary Analysis 159(6)
Analysis of Experiment SUBMISSION 160(5)
Descriptive Statistics 160(2)
Assumption Testing 162(1)
Mann-Whitney Test 163(2)
Meta-Analysis 165(32)
Introduction to Meta-Analysis 166(5)
Combining p-Values Across Experiments 166(1)
Combining Effect Sizes Across Experiments 167(4)
Preliminary Meta-Analysis 171(13)
Combining Effects on the Percentage of Acceptance Tests Passed (PATP) 171(4)
Combining Effects on the Number of Acceptance Tests Passed Per Development Hour (NATPPH) 175(2)
Combining Effects on Design Complexity 177(7)
Selective Meta-Analysis 184(13)
Combining Effects on the Percentage of Acceptance Tests Passed (PATP) 185(2)
Combining Effects on the Number of Acceptance Tests Passed Per Hour (NATPPH) 187(1)
Combining Effects on Design Complexity 188(9)
Discussion, Conclusions and Future Work 197(22)
Overview of Results 197(3)
Rules of Thumb for Industry Practitioners 200(2)
Explaining Plausible Mechanisms Behind the Results 202(3)
Contributions 205(1)
Threats to Validity 206(11)
Statistical Conclusion Validity 206(3)
Internal Validity 209(2)
Construct Validity 211(2)
External Validity 213(3)
Threats to Validity of Meta-Analysis 216(1)
Conclusions and Future Work 217(2)
Appendix 219(4)
Glossary 223(4)
References 227(16)
Index 243
Lech Madeyski is Assistant Professor in the Software Engineering Department, Institute of Informatics, Wroclaw University of Technology, Poland. His current research interests include: experimentation in software engineering, software metrics and models, software quality and testing, software products and process improvement, and agile software development methodologies (e.g., eXtreme Programming).

He has published research papers in refereed software engineering journals (e.g., IET Software, Journal of Software Process: Improvement and Practice) and conferences (e.g., PROFES, XP, EuroSPI, CEE-SET). He has been a member of the program, steering, or organization committee for several software engineering conferences such as PROFES (International Conference on Product Focused Software Process Improvement), ENASE (International Working Conference on Evaluation of Novel Approaches to Software Engineering), CEE-SET (Central and East-European Conference on Software Engineering Techniques), and BPSC (International Working Conference on Business Process and Services Computing).

His paper at PROFES 2007 received the Best Paper Award.