
E-book: Software Metrics: A Rigorous and Practical Approach, Third Edition

Norman Fenton (School of Electronic Engineering and Computer Science, Queen Mary University of London, UK), James M. Bieman (Colorado State University, Fort Collins, USA)
  • Format: PDF+DRM
  • Price: 51.99 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM Restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this e-book in encrypted form, which means that you must install special software in order to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

"This book provides an up-to-date and rigorous framework for controlling, managing, and predicting software development processes. Emphasizing real-world applications, the authors apply basic ideas in measurement theory to quantify software development resources, processes, and products. The text offers an accessible and comprehensive introduction to software metrics. It features extensive case studies in addition to worked examples and exercises. This new edition covers current research and practical applications of cost estimation methods."

Fenton and Bieman present a primary textbook for an academic or professional course on software metrics and quality assurance, but note that it could also serve as a supplement for any course in software engineering and as a broad reference for practitioners. They examine and explain the fundamentals of software measurement, experimentation, and collecting and analyzing data, then detail a range of specific metrics and their uses. The topics include a goal-based framework for software measurement, empirical investigation, measuring internal product attributes, and measuring and predicting software reliability. Annotation ©2015 Ringgold, Inc., Portland, OR (protoview.com)

A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and Processes
Reflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics. Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems.

New to the Third Edition
This edition contains new material relevant to object-oriented design, design patterns, model-driven development, and agile development processes. It includes a new chapter on causal models and Bayesian networks and their application to software engineering. This edition also incorporates recent references to the latest software metrics activities, including research results, industrial case studies, and standards.

Suitable for a Range of Readers
With numerous examples and exercises, this book continues to serve a wide audience. It can be used as a textbook for a software metrics and quality assurance course or as a useful supplement in any software engineering course. Practitioners will appreciate the important results that have previously only appeared in research-oriented publications. Researchers will welcome the material on new results as well as the extensive bibliography of measurement-related information. The book also gives software managers and developers practical guidelines for selecting metrics and planning their use in a measurement program.

Arvustused

"The wait for a new edition of this book is over. Long considered the go-to text for its thorough coverage of software measurement and experimentation, the new edition succeeds splendidly in bringing the field up to date while including new and important topics, updated with the latest results from recent advances in software measurement research and practice. The authors do an outstanding job of balancing formal analysis topics with examples that ground the reader in practical application. Researchers and practitioners alike will gain a valuable understanding of why measurement is critical for quality improvements in software development processes and software products. With this updated edition, this book solidifies its standing as the most complete reference text for software measurement." Computing Review, April 2015

"I have been using this book as my primary reference on software metrics for over 20 years now. It still remains the best book by far on the science and practice of software metrics. This latest edition has some important updates, especially with the inclusion of material on Bayesian networks for prediction and risk assessment." Paul Krause, University of Surrey, Guildford, UK

"Great introduction to software metrics, measurement, and experimentation. This will be a must-read for my software engineering students." Lukasz Radlinski, PhD, West Pomeranian University of Technology, Szczecin, Poland

"I have loved this book from the first edition and with each new edition it just keeps getting better and better. I use this book constantly in my software engineering research and always recommend it to students. It is so much more than a software metrics book; to me it is an essential companion to rigorous empirical software engineering." Dr. Tracy Hall, Department of Computer Science, Brunel University, Uxbridge, UK

"This new edition of Software Metrics succeeds admirably in bringing the field of software measurement up to date and in delivering a wider range of topics to its readers as compared to its previous edition. I have both reviewed and used the book in my software measurement courses and find it to be one of the most advanced and well structured on the market today, tailored for training software engineers in both theoretical and practical aspects of software measurement. I look forward to continuing the use of the book for teaching purposes and am very comfortable offering my recommendation for this book as a primary textbook for graduate or undergraduate courses on software measurement. Thank you again for providing such a quality book to our software engineering education programs." Olga Ormandjieva, Associate Professor, Department of Computer Science and Software Engineering, Concordia University, Canada

"This book lucidly and diligently covers the nuts and bolts of software measurement. It is an excellent reference on software metric fundamentals, suitable as a comprehensive textbook for software engineering students and as a definitive manual for industry practitioners." Mohammad Alshayeb, Associate Professor of Software Engineering, King Fahd University of Petroleum and Minerals

Preface
Acknowledgments
Authors
PART I Fundamentals of Measurement and Experimentation
Chapter 1 Measurement: What Is It and Why Do It?
1.1 Measurement in Everyday Life
1.1.1 What Is Measurement?
1.1.2 "What Is Not Measurable Make Measurable"
1.2 Measurement in Software Engineering
1.2.1 Neglect of Measurement in Software Engineering
1.2.2 Objectives for Software Measurement
1.2.2.1 Managers
1.2.2.2 Developers
1.2.3 Measurement for Understanding, Control, and Improvement
1.3 Scope of Software Metrics
1.3.1 Cost and Effort Estimation
1.3.2 Data Collection
1.3.3 Quality Models and Measures
1.3.4 Reliability Models
1.3.5 Security Metrics
1.3.6 Structural and Complexity Metrics
1.3.7 Capability Maturity Assessment
1.3.8 Management by Metrics
1.3.9 Evaluation of Methods and Tools
1.4 Summary
Exercises
Chapter 2 The Basics of Measurement
2.1 The Representational Theory of Measurement
2.1.1 Empirical Relations
2.1.2 The Rules of the Mapping
2.1.3 The Representation Condition of Measurement
2.2 Measurement and Models
2.2.1 Defining Attributes
2.2.2 Direct and Derived Measurement
2.2.3 Measurement for Prediction
2.3 Measurement Scales and Scale Types
2.3.1 Nominal Scale Type
2.3.2 Ordinal Scale Type
2.3.3 Interval Scale Type
2.3.4 Ratio Scale Type
2.3.5 Absolute Scale Type
2.4 Meaningfulness in Measurement
2.4.1 Statistical Operations on Measures
2.4.2 Objective and Subjective Measures
2.4.3 Measurement in Extended Number Systems
2.4.4 Derived Measurement and Meaningfulness
2.5 Summary
Exercises
References
Further Reading
Chapter 3 A Goal-Based Framework for Software Measurement
3.1 Classifying Software Measures
3.1.1 Processes
3.1.2 Products
3.1.2.1 External Product Attributes
3.1.2.2 Internal Product Attributes
3.1.2.3 The Importance of Internal Attributes
3.1.2.4 Internal Attributes and Quality Control and Assurance
3.1.2.5 Validating Composite Measures
3.1.3 Resources
3.1.4 Change and Evolution
3.2 Determining What to Measure
3.2.1 Goal-Question-Metric Paradigm
3.2.2 Measurement for Process Improvement
3.2.3 Combining GQM with Process Maturity
3.3 Applying the Framework
3.3.1 Cost and Effort Estimation
3.3.2 Productivity Measures and Models
3.3.3 Data Collection
3.3.4 Quality Models and Measures
3.3.5 Reliability Models
3.3.6 Structural and Complexity Metrics
3.3.7 Management by Metrics
3.3.8 Evaluation of Methods and Tools
3.4 Software Measurement Validation
3.4.1 Validating Prediction Systems
3.4.2 Validating Measures
3.4.3 A Mathematical Perspective of Metric Validation
3.5 Performing Software Measurement Validation
3.5.1 A More Stringent Requirement for Validation
3.5.2 Validation and Imprecise Definition
3.5.3 How Not to Validate
3.5.4 Choosing Appropriate Prediction Systems
3.6 Summary
Exercises
Further Reading
Chapter 4 Empirical Investigation
4.1 Principles of Empirical Studies
4.1.1 Control of Variables and Study Type
4.1.2 Study Goals and Hypotheses
4.1.3 Maintaining Control over Variables
4.1.4 Threats to Validity
4.1.5 Human Subjects
4.2 Planning Experiments
4.2.1 A Process Model for Performing Experiments
4.2.1.1 Conception
4.2.1.2 Design
4.2.1.3 Preparation
4.2.1.4 Execution
4.2.1.5 Analysis
4.2.1.6 Dissemination and Decision-Making
4.2.2 Key Experimental Design Concepts
4.2.2.1 Replication
4.2.2.2 Randomization
4.2.2.3 Local Control
4.2.3 Types of Experimental Designs
4.2.3.1 Crossing
4.2.3.2 Nesting
4.2.4 Selecting an Experimental Design
4.2.4.1 Choosing the Number of Factors
4.2.4.2 Factors versus Blocks
4.2.4.3 Choosing between Nested and Crossed Designs
4.2.4.4 Fixed and Random Effects
4.2.4.5 Matched- or Same-Subject Designs
4.2.4.6 Repeated Measurements
4.3 Planning Case Studies as Quasi-Experiments
4.3.1 Sister Projects
4.3.2 Baselines
4.3.3 Partitioned Project
4.3.4 Retrospective Case Study
4.4 Relevant and Meaningful Studies
4.4.1 Confirming Theories and "Conventional Wisdom"
4.4.2 Exploring Relationships
4.4.3 Evaluating the Accuracy of Prediction Models
4.4.4 Validating Measures
4.5 Summary
Exercises
Further Reading
References
Chapter 5 Software Metrics Data Collection
5.1 Defining Good Data
5.2 Data Collection for Incident Reports
5.2.1 The Problem with Problems
5.2.2 Failures
5.2.3 Faults
5.2.4 Changes
5.3 How to Collect Data
5.3.1 Data Collection Forms
5.3.2 Data Collection Tools
5.4 Reliability of Data Collection Procedures
5.5 Summary
Exercises
References
Further Reading
Chapter 6 Analyzing Software Measurement Data
6.1 Statistical Distributions and Hypothesis Testing
6.1.1 Probability Distributions
6.1.2 Hypothesis Testing Approaches
6.2 Classical Data Analysis Techniques
6.2.1 Nature of the Data
6.2.1.1 Sampling, Population, and Data Distribution
6.2.1.2 Distribution of Software Measurements
6.2.1.3 Statistical Inference and Classical Hypothesis Testing
6.2.2 Purpose of the Experiment
6.2.2.1 Confirming a Theory
6.2.2.2 Exploring a Relationship
6.2.3 Decision Tree
6.3 Examples of Simple Analysis Techniques
6.3.1 Box Plots
6.3.2 Bar Charts
6.3.3 Control Charts
6.3.4 Scatter Plots
6.3.5 Measures of Association
6.3.6 Robust Correlation
6.3.7 Linear Regression
6.3.8 Robust Regression
6.3.9 Multivariate Regression
6.4 More Advanced Methods
6.4.1 Classification Tree Analysis
6.4.2 Transformations
6.4.3 Multivariate Data Analysis
6.4.3.1 Principal Component Analysis
6.4.3.2 Cluster Analysis
6.4.3.3 Discriminant Analysis
6.5 Multicriteria Decision Aids
6.5.1 Basic Concepts of Multicriteria Decision-Making
6.5.2 Multiattribute Utility Theory
6.5.3 Outranking Methods
6.5.4 Bayesian Evaluation of Multiple Hypotheses
6.6 Overview of Statistical Tests
6.6.1 One-Group Tests
6.6.1.1 Binomial Test
6.6.1.2 Chi-Squared Test for Goodness of Fit
6.6.1.3 Kolmogorov–Smirnov One-Sample Test
6.6.1.4 One-Sample Runs Test
6.6.1.5 Change-Point Test
6.6.2 Two-Group Tests
6.6.2.1 Tests to Compare Two Matched or Related Groups
6.6.2.2 Tests to Compare Two Independent Groups
6.6.3 Comparisons Involving More than Two Groups
6.7 Summary
Exercises
Reference
Further Reading
Chapter 7 Metrics for Decision Support: The Need for Causal Models
7.1 From Correlation and Regression to Causal Models
7.2 Bayes Theorem and Bayesian Networks
7.3 Applying Bayesian Networks to the Problem of Software Defects Prediction
7.3.1 A Very Simple BN for Understanding Defect Prediction
7.3.2 A Full Model for Software Defects and Reliability Prediction
7.3.3 Commercial Scale Versions of the Defect Prediction Models
7.4 Bayesian Networks for Software Project Risk Assessment and Prediction
7.5 Summary
Exercises
Further Reading
PART II Software Engineering Measurement
Chapter 8 Measuring Internal Product Attributes: Size
8.1 Properties of Software Size
8.2 Code Size
8.2.1 Counting Lines of Code to Measure Code Size
8.2.2 Halstead's Approach
8.2.3 Alternative Code Size Measures
8.2.4 Dealing with Nontextual or External Code
8.3 Design Size
8.4 Requirements Analysis and Specification Size
8.5 Functional Size Measures and Estimators
8.5.1 Function Points
8.5.1.1 Function Points for Object-Oriented Software
8.5.1.2 Function Point Limitations
8.5.2 COCOMO II Approach
8.6 Applications of Size Measures
8.6.1 Using Size to Normalize Other Measurements
8.6.2 Size-Based Reuse Measurement
8.6.3 Size-Based Software Testing Measurement
8.7 Problem, Solution Size, Computational Complexity
8.8 Summary
Exercises
Further Reading
Chapter 9 Measuring Internal Product Attributes: Structure
9.1 Aspects of Structural Measures
9.1.1 Structural Complexity Properties
9.1.2 Length Properties
9.1.3 Coupling Properties
9.1.4 Cohesion Properties
9.1.5 Properties of Custom Attributes
9.2 Control Flow Structure of Program Units
9.2.1 Flowgraph Model and the Notion of Structured Programs
9.2.1.1 Sequencing and Nesting
9.2.1.2 Generalized Notion of Structuredness
9.2.1.3 Prime Decomposition
9.2.2 Hierarchical Measures
9.2.2.1 McCabe's Cyclomatic Complexity Measure
9.2.2.2 McCabe's Essential Complexity Measure
9.2.3 Code Structure and Test Coverage Measures
9.2.3.1 Minimum Number of Test Cases
9.2.3.2 Test Effectiveness Ratio
9.3 Design-Level Attributes
9.3.1 Models of Modularity and Information Flow
9.3.2 Global Modularity
9.3.3 Morphology
9.3.4 Tree Impurity
9.3.5 Internal Reuse
9.3.6 Information Flow
9.3.7 Information Flow: Test Coverage Measures
9.4 Object-Oriented Structural Attributes and Measures
9.4.1 Measuring Coupling in Object-Oriented Systems
9.4.2 Measuring Cohesion in Object-Oriented Systems
9.4.3 Object-Oriented Length Measures
9.4.4 Object-Oriented Reuse Measurement
9.4.5 Design Pattern Use
9.5 No Single Overall "Software Complexity" Measure
9.6 Summary
Exercises
Appendices to Chapter 9
A.1 McCabe's Testing Strategy
A.1.1 Background
A.1.2 The Strategy
A.2 Computing Test Coverage Measures
Further Reading
Chapter 10 Measuring External Product Attributes
10.1 Modeling Software Quality
10.1.1 Early Models
10.1.2 Define-Your-Own Models
10.1.3 ISO/IEC 9126-1 and ISO/IEC 25010 Standard Quality Models
10.2 Measuring Aspects of Quality
10.2.1 Defects-Based Quality Measures
10.2.1.1 Defect Density Measures
10.2.1.2 Other Quality Measures Based on Defect Counts
10.3 Usability Measures
10.3.1 External View of Usability
10.3.2 Internal Attributes Affecting Usability
10.4 Maintainability Measures
10.4.1 External View of Maintainability
10.4.2 Internal Attributes Affecting Maintainability
10.5 Security Measures
10.5.1 External View of Security
10.5.2 Internal Attributes Affecting Security
10.6 Summary
Exercises
Further Reading
Chapter 11 Software Reliability: Measurement and Prediction
11.1 Basics of Reliability Theory
11.2 The Software Reliability Problem
11.3 Parametric Reliability Growth Models
11.3.1 The Jelinski–Moranda Model
11.3.2 Other Models Based on JM
11.3.3 The Littlewood Model
11.3.4 The Littlewood–Verrall Model
11.3.5 Nonhomogeneous Poisson Process Models
11.3.6 General Comments on the Models
11.4 Predictive Accuracy
11.4.1 Dealing with Bias: The u-Plot
11.4.2 Dealing with Noise
11.4.3 Prequential Likelihood Function
11.4.4 Choosing the Best Model
11.5 Recalibration of Software Reliability Growth Predictions
11.6 Importance of the Operational Environment
11.7 Wider Aspects of Software Reliability
11.8 Summary
Exercises
Further Reading
Appendix: Solutions to Selected Exercises
Bibliography
Index
Norman Fenton, PhD, is a professor of risk information management at Queen Mary University of London and the chief executive officer of Agena, a company that specializes in risk management for critical systems. He is renowned for his work in software engineering and software metrics. His current projects focus on applying Bayesian methods of analysis to risk assessment. He has published 6 books and more than 140 refereed articles and has provided consulting to many major companies worldwide.

James M. Bieman, PhD, is a professor of computer science at Colorado State University, where he was the founding director of the Software Assurance Laboratory. His research focuses on the evaluation of software designs and processes, including ways to test nontestable software, techniques that support automated software repair, and the relationships between internal design attributes and external quality attributes. He serves on the editorial boards of the Software Quality Journal and the Journal of Software and Systems Modeling.