
Automated Defect Prevention: Best Practices in Software Management [Hardback]

  • Format: Hardback, 454 pages, height x width x thickness: 243x162x27 mm, weight: 767 g
  • Series: IEEE Press
  • Publication date: 09-Oct-2007
  • Publisher: John Wiley & Sons Inc
  • ISBN-10: 0470042125
  • ISBN-13: 9780470042120
Huizinga (computer science, California State University) and Kolawa, a software consultant, describe an approach to software management based on establishing an infrastructure that serves as the foundation for the project. This infrastructure functions as a software 'production line' that automates repetitive tasks, organizes project activities, tracks project status, and collects project data. Ideas are illustrated with black-and-white figures showing how to structure projects, along with real-world examples, developers' testimonies, and tips on implementing strategies across a project group. The book concludes with two case studies, and appendices on software development models and software engineering tools. The readership includes software project managers, architects, developers, testers, and QA professionals. The book is also appropriate for upper-level undergraduate and graduate students in courses in computer science, software engineering, and information systems. Background in basic software engineering concepts is assumed. Annotation ©2007 Book News, Inc., Portland, OR (booknews.com)

This book describes an approach to software management based on establishing an infrastructure that serves as the foundation for the project. The infrastructure defines people's roles, the necessary technology, and the interactions between people and technology. It automates repetitive tasks, organizes project activities, tracks project status, and seamlessly collects project data to provide the measures needed for decision making. Most importantly, it sustains and facilitates the improvement of human-defined processes.
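
To make the idea concrete, here is a minimal sketch (not taken from the book) of the kind of repetitive task such an infrastructure automates: a nightly job that builds the product, runs the test suite, and appends the results to a log for later trend analysis. The `make build` and `make test` commands and the `build_history.csv` file name are illustrative assumptions, not prescriptions from the authors.

```python
# Hypothetical sketch of an automated, data-collecting nightly job.
# The build/test commands and log file name are assumptions for illustration.
import csv
import subprocess
from datetime import datetime, timezone

def run(command):
    """Run a shell command; return (exit code, combined stdout/stderr)."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.returncode, result.stdout + result.stderr

def nightly_job(log_path="build_history.csv"):
    timestamp = datetime.now(timezone.utc).isoformat()
    build_code, _ = run("make build")        # assumed build command
    test_code, test_out = run("make test")   # assumed test command
    # Append one row per run so failure rates and other trends can be
    # charted later -- the "seamless data collection" idea in miniature.
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [timestamp, build_code, test_code, len(test_out.splitlines())]
        )

if __name__ == "__main__":
    nightly_job()
```

Even a small script like this turns a manual chore into a repeatable, measured process step, which is the pattern the infrastructure applies across the whole project.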

The methodology described in the book, called Automated Defect Prevention (ADP), stands out from the current software landscape because of two distinguishing features: its comprehensive approach to defect prevention and its far-reaching emphasis on automation. ADP is a practical, thorough guide to implementing and managing software projects and processes. It offers a set of best practices for software management through process improvement, achieved by gradually automating repetitive tasks on top of a flexible, adaptable infrastructure that essentially forms a software production line.

In defining the technology infrastructure, ADP describes necessary features rather than specific tools, and thus remains vendor neutral. Only a basic subset of features essential for building an effective infrastructure has been selected; many existing commercial and non-commercial tools support these features, as well as more advanced ones. Appendix E contains a list of such tools.

Preface xix
Features and Organization xxi
Practice Descriptions xxiii
Intended Audience xxiv
Acknowledgments xxiv
Permissions xxvi
Disclaimer xxvi
The Case for Automated Defect Prevention 1(18)
What Is ADP? 1(2)
What Are the Goals of ADP? 3(5)
People: Stimulated and Satisfied 3(1)
Product: High Quality 4(1)
Organization: Increased Productivity and Operational Efficiency 5(1)
Process: Controlled, Improved, and Sustainable 6(1)
Project: Managed through Informed Decision Making 7(1)
How Is ADP Implemented? 8(3)
Principles 8(1)
Practices 9(1)
Policies 9(1)
Defect Prevention Mindset 10(1)
Automation 11(1)
From the Waterfall to Modern Software Development Process Models 11(2)
Acronyms 13(1)
Glossary 13(2)
References 15(1)
Exercises 16(3)
Principles of Automated Defect Prevention 19(34)
Introduction 19(2)
Defect Prevention: Definition and Benefits 21(3)
Historical Perspective: Defect Analysis and Prevention in the Auto Industry---What Happened to Deming? 24(2)
Principles of Automated Defect Prevention 26(12)
Principle 1---Establishment of Infrastructure: "Build a Strong Foundation through Integration of People and Technology" 26(2)
Principle 2---Application of General Best Practices: "Learn from Others' Mistakes" 28(1)
Principle 3---Customization of Best Practices: "Learn from Your Own Mistakes and Improve the Process" 29(2)
Principle 4---Measurement and Tracking of Project Status: "Understand the Past and Present to Make Decisions about the Future" 31(1)
Principle 5---Automation: "Let the Computer Do It" 32(4)
Principle 6---Incremental Implementation of ADP's Practices and Policies 36(2)
Automated Defect Prevention-Based Software Development Process Model 38(3)
Examples 41(7)
Focus on Root Cause Analysis of a Defect 41(3)
Focus on Infrastructure 44(1)
Focus on Customized Best Practice 45(2)
Focus on Measurements of Project Status 47(1)
Acronyms 48(1)
Glossary 48(2)
References 50(1)
Exercises 51(2)
Initial Planning and Infrastructure 53(32)
Introduction 53(1)
Initial Software Development Plan 54(2)
Product 54(1)
People 54(1)
Technology 55(1)
Process 55(1)
Best Practices for Creating People Infrastructure 56(7)
Defining Groups 56(2)
Determining a Location for Each Group's Infrastructure 58(1)
Defining People Roles 58(3)
Establishing a Training Program 61(1)
Cultivating a Positive Group Culture 61(2)
Best Practices for Creating Technology Infrastructure 63(12)
Automated Reporting System 63(3)
Policy for Use of Automated Reporting System 66(2)
Minimum Technology Infrastructure 68(4)
Intermediate Technology Infrastructure 72(1)
Expanded Technology Infrastructure 73(2)
Integrating People and Technology 75(2)
Human Factors and Concerns 77(1)
Examples 78(2)
Focus on Developer Ideas 78(1)
Focus on Reports Generated by the Minimum Infrastructure 79(1)
Acronyms 80(1)
Glossary 81(1)
References 82(1)
Exercises 83(2)
Requirements Specification and Management 85(34)
Introduction 85(2)
Best Practices for Gathering and Organizing Requirements 87(18)
Creating the Product Vision and Scope Document 87(2)
Gathering and Organizing Requirements 89(4)
Prioritizing Requirements 93(2)
Developing Use Cases 95(3)
Creating a Prototype to Elicit Requirements 98(2)
Creating Conceptual Test Cases 100(1)
Requirements Documents Inspection 101(2)
Managing Changing Requirements 103(2)
Best Practices in Different Environments 105(2)
Existing versus New Software Project 105(1)
In-House versus Outsourced Development Teams 106(1)
Policy for Use of the Requirements Management System 107(3)
Measurements Related to Requirements Management System 109(1)
Tracking of Data Related to the Requirements Management System 110(1)
Examples 110(5)
Focus on Customized Best Practice 110(2)
Focus on Monitoring and Managing Requirement Priorities 112(2)
Focus on Change Requests 114(1)
Acronyms 115(1)
Glossary 115(1)
References 116(1)
Exercises 116(3)
Extended Planning and Infrastructure 119(46)
Introduction 119(1)
Software Development Plan 120(1)
Defining Project Objectives 121(3)
Defining Project Artifacts and Deliverables 124(4)
The Vision and Scope Document and Project Objectives 124(1)
SRS, Describing the Product Key Features 125(1)
Architectural and Detailed Design Documents and Models 125(1)
List of COTS (Commercial Off-the-Shelf Components) Used 125(1)
Source and Executable Code 126(1)
Test Plan 126(1)
Acceptance Plan 126(1)
Periodic Reports Generated by the Reporting System 126(1)
Deployment Plan 127(1)
User and Operational Manuals 127(1)
Customer Training Plan 127(1)
Selecting a Software Development Process Model 128(1)
Defining Defect Prevention Process 129(1)
Managing Risk 129(3)
Managing Change 132(1)
Defining Work Breakdown Structure---An Iterative Approach 132(3)
Best Practices for Estimating Project Effort 135(11)
Estimation by Using Elements of Wideband Delphi 137(1)
Estimation by Using Effort Analogy 138(3)
Estimation by Using Parametric Models 141(4)
Estimations of Using COTS and Code Reuse 145(1)
Quality of Estimation and the Iterative Adjustments of Estimates 145(1)
Best Practices for Preparing the Schedule 146(3)
Measurement and Tracking for Estimation 149(1)
Identifying Additional Resource Requirements 150(7)
Extending the Technology Infrastructure 151(5)
Extending the People Infrastructure 156(1)
Examples 157(3)
Focus on the Root Cause of a Project Scheduling Problem 157(1)
Focus on Organizing and Tracking Artifacts 158(1)
Focus on Scheduling and Tracking Milestones 158(2)
Acronyms 160(1)
Glossary 160(2)
References 162(1)
Exercises 163(2)
Architectural and Detailed Design 165(42)
Introduction 165(3)
Best Practices for Design of System Functionality and Its Quality Attributes 168(21)
Identifying Critical Attributes of Architectural Design 168(4)
Defining the Policies for Design of Functional and Nonfunctional Requirements 172(3)
Applying Design Patterns 175(3)
Service-Oriented Architecture 178(1)
Mapping Requirements to Modules 178(3)
Designing Module Interfaces 181(1)
Modeling Modules and Their Interfaces 182(3)
Defining Application Logic 185(1)
Refining Test Cases 186(1)
Design Document Storage and Inspection 187(1)
Managing Changes in Design 188(1)
Best Practices for Design of Graphical User Interface 189(9)
Identifying Critical Attributes of User Interface Design 190(3)
Defining the User Interface Design Policy 193(2)
Identifying Architectural Patterns Applicable to the User Interface Design 195(1)
Creating Categories of Actions 195(1)
Dividing Actions into Screens 196(1)
Prototyping the Interface 197(1)
Testing the Interface 197(1)
Examples 198(2)
Focus on Module Assignments and Design Progress 198(1)
Focus on the Number of Use Cases per Module 198(1)
Focus on Module Implementation Overview 199(1)
Focus on Customized Best Practice for GUI Design 199(1)
Acronyms 200(1)
Glossary 201(3)
References 204(1)
Exercises 205(2)
Construction 207(42)
Introduction 207(2)
Best Practices for Code Construction 209(20)
Applying Coding Standards throughout Development 210(15)
Applying the Test-First Approach at the Service and Module Implementation Level 225(1)
Implementing Service Contracts and/or Module Interfaces before Their Internal Functionality 226(1)
Applying Test Driven Development for Algorithmically Complex and Critical Code Units 227(1)
Conducting White Box Unit Testing after Implementing Each Unit and before Checking the Code into the Source Control System 228(1)
Verifying Code Consistency with the Requirements and Design 228(1)
Policy for Use of the Code Source Control System 229(8)
Measurements Related to Source Control 234(2)
Tracking of Source Control Data 236(1)
Policy for Use of Automated Build 237(4)
Measurements Related to Automated Builds 240(1)
Tracking of Data Related to Automated Builds 241(1)
Examples 241(2)
Focus on a Customized Coding Standard Policy 241(1)
Focus on Features/Tests Reports 242(1)
Acronyms 243(1)
Glossary 244(1)
References 245(2)
Exercises 247(2)
Testing and Defect Prevention 249(38)
Introduction 249(1)
Best Practices for Testing and Code Review 250(21)
Conducting White Box Unit Testing: Bottom-Up Approach 251(5)
Conducting Black Box Testing and Verifying the Convergence of Top-Down and Bottom-Up Tests 256(4)
Conducting Code Reviews as a Testing Activity 260(3)
Conducting Integration Testing 263(2)
Conducting System Testing 265(3)
Conducting Regression Testing 268(2)
Conducting Acceptance Testing 270(1)
Defect Analysis and Prevention 271(2)
Policy for Use of Problem Tracking System 273(5)
Measurements of Data Related to the Problem Tracking System 277(1)
Tracking of Data Related to the Problem Tracking System 277(1)
Policy for Use of Regression Testing System 278(1)
Measurements Related to the Regression Testing System 279(1)
Tracking of Data Related to the Regression Testing System 279(1)
Examples 279(4)
Focus on Defect Tracking Reports 279(1)
Focus on Test Type Reports 280(1)
Example of a Root Cause Analysis of a Design and Testing Defect 280(3)
Acronyms 283(1)
Glossary 283(2)
References 285(1)
Exercises 286(1)
Trend Analysis and Deployment 287(24)
Introduction 287(1)
Trends in Process Control 288(2)
Process Variations 288(1)
Process Stabilization 288(1)
Process Capability 289(1)
Trends in Project Progress 290(11)
Analyzing Features/Requirements Implementation Status 290(4)
Analyzing Source Code Growth 294(2)
Analyzing Test Results 296(3)
Analyzing Defects 299(2)
Analyzing Cost and Schedule 301(1)
Best Practices for Deployment and Transition 301(8)
Deployment to a Staging System 301(2)
Automation of the Deployment Process 303(1)
Assessing Release Readiness 304(3)
Release: Deployment to the Production System 307(1)
Nonintrusive Monitoring 307(2)
Acronyms 309(1)
Glossary 309(1)
References 309(1)
Exercises 310(1)
Managing External Factors 311(30)
Introduction 311(1)
Best Practices for Managing Outsourced Projects 312(10)
Establishing a Software Development Outsource Process 313(1)
Phase 0: Decision to Outsource 314(1)
Phase 1: Planning 315(4)
Phase 2: Implementation 319(2)
Phase 3: Termination 321(1)
Best Practices for Facilitating IT Regulatory Compliance 322(6)
Section 508 of the U.S. Rehabilitation Act 322(1)
Sarbanes-Oxley Act of 2002 323(5)
Best Practices for Implementation of CMMI 328(9)
Capability and Maturity Model Integration (CMMI) 329(1)
Staged Representation 330(1)
Putting Staged Representation-Based Improvement into Practice Using ADP 330(6)
Putting Continuous Representation-Based Improvement into Practice Using ADP 336(1)
Acronyms 337(1)
Glossary 337(1)
References 338(1)
Exercises 339(2)
Case Study: Automation as an Agent of Change 341(12)
Case Study: Implementing Java Coding Standards in a Financial Application 341(11)
Company Profile 341(1)
Problems 342(2)
Solution 344(4)
Data Collected 348(3)
The Bottom Line Results---Facilitating Change 351(1)
Acronyms 352(1)
Glossary 352(1)
References 352(1)
APPENDIX A: A BRIEF SURVEY OF MODERN SOFTWARE DEVELOPMENT PROCESS MODELS 353(8)
Introduction 353(1)
Rapid Application Development (RAD) and Rapid Prototyping 353(2)
Incremental Development 355(1)
Spiral Model 355(2)
Object-Oriented Unified Process 357(1)
Extreme and Agile Programming 358(1)
References 359(2)
APPENDIX B: MARS POLAR LANDER (MPL): LOSS AND LESSONS 361(8)
Gordon Hebert
No Definite Root Cause 361(1)
No Mission Data 362(2)
Root Cause Revisited 364(2)
References 366(3)
APPENDIX C: SERVICE-ORIENTED ARCHITECTURE: EXAMPLE OF AN IMPLEMENTATION WITH ADP BEST PRACTICES 369(22)
Introduction 369(1)
Web Service Creation: Initial Planning and Requirements 369(2)
Functional Requirements 370(1)
Nonfunctional Requirements 371(1)
Web Service Creation: Extended Planning and Design 371(4)
Initial Architecture 371(2)
Extended Infrastructure 373(1)
Design 374(1)
Web Service Creation: Construction and Testing, Stage 1---Module Implementation 375(6)
Applying Coding Standards 376(1)
Implementing Interfaces and Applying a Test-First Approach for Modules and Submodules 376(1)
Generating White Box JUnit Tests 377(1)
Gradually Implementing the Submodule until All JUnit Tests Pass and Converge with the Original Black Box Tests 378(2)
Checking Verified Tests into the Source Control System and Running Nightly Regression Tests 380(1)
Web Service Creation: Construction and Testing, Stage 2---The WSDL Document Implementation 381(3)
Creating and Deploying the WSDL Document on the Staging Server as Part of the Nightly Build Process 381(2)
Avoiding Inline Schemas when XML Validation Is Required 383(1)
Avoiding Cyclical Referencing when Using Inline Schemas 383(1)
Verifying WSDL Document for XML Validity 384(1)
Avoiding "Encoded" Coding Style by Checking Interoperability 384(1)
Creating Regression Tests for the WSDL Documents and Schemas to Detect Undesired Changes 384(1)
Web Service Creation: Server Deployment 384(2)
Deploying the Web Service to a Staging Server as Part of the Nightly Build Process 384(1)
Executing Web Service Tests That Verify the Functionality of the Web Service 385(1)
Creating Scenario-Based Tests and Incorporating Them Into the Nightly Test Process 386(1)
Database Testing 386(1)
Web Service Creation: Client Deployment 386(1)
Implementing the Client According to the WSDL Document Specification 386(1)
Using Server Stubs to Test Client Functionality---Deploying the Server Stub as Part of the Nightly Deployment Process 387(1)
Adding Client Tests into the Nightly Test Process 387(1)
Web Service Creation: Verifying Security 387(2)
Determining the Desired Level of Security 387(1)
Deploying Security-Enabled Web Service on Additional Port of the Staging Server 388(1)
Leveraging Existing Tests: Modifying Them to Test for Security and Incorporating Them into the Nightly Test Process 388(1)
Web Service Creation: Verifying Performance through Continuous Performance/Load Testing 389(2)
Starting Load Testing as Early as Possible and Incorporating It into the Nightly Test Process 389(1)
Using Results of Load Tests to Determine Final Deployment Configuration 389(2)
APPENDIX D: AJAX BEST PRACTICE: CONTINUOUS TESTING 391(4)
Why Ajax? 391(1)
Ajax Development and Testing Challenges 392(1)
Continuous Testing 392(3)
APPENDIX E: SOFTWARE ENGINEERING TOOLS 395(6)
Glossary 401(14)
Index 415


Dorota Huizinga, PhD, is the Associate Dean of the College of Engineering and Computer Science and Professor of Computer Science at California State University, Fullerton. Her publication record spans a wide range of computer science disciplines, and her research has been sponsored by the National Science Foundation, the California State University system, and private industry.

Adam Kolawa, PhD, is the cofounder and CEO of Parasoft, a leading provider of Automated Error Prevention software solutions. Dr. Kolawa is a coauthor of Bulletproofing Web Applications, has contributed to or written more than 100 commentary pieces and technical papers, and has authored numerous scientific papers.