1 Selection of Significant Metrics for Improving the Performance of Change-Proneness Modules  1
  1.3 Framework for Software Metrics Validation  4
  1.4 Classification Techniques  6
    1.6.1 Artificial Neural Network (ANN) Model  9
    1.6.2 Ensembles of Classification Models  12
    1.6.3 Comparison of Results  13
|
2 Effort Estimation of Web Based Applications Using ERD, Use Case Point Method and Machine Learning  19
  2.3 Overview of the Chapter  23
  2.4 Proposed Work with Estimating the Size  24
  2.5 Effort Estimation of the Web-Based Application  30
  2.6 Effort Estimation Using Machine Learning  32
    2.6.1 Data Set Used for Training Purposes  32
    2.6.2 Support Vector Machine (SVM)  32
    2.6.3 Nearest Neighbour Algorithm  34
  2.7 Conclusion and Future Work  35
|
3 Usage of Machine Learning in Software Testing  39
  3.2 Background: Software Vulnerability Analysis and Discovery  40
    3.2.2 Completeness, Soundness and Undecidable  41
    3.2.3 Conventional Approaches  42
    3.2.4 Categorizing Previous Work  43
  3.3 Vulnerability Prediction Based on Software Metrics  44
  3.4 Anomaly Detection Approaches  44
  3.5 Vulnerable Code Pattern Recognition  45
  3.6 System and Method for Automated Software Testing Based on Machine Learning  46
    3.6.1 System for Automated Software Testing  46
    3.6.2 Method for Automated Software Testing  48
  3.7 Summary of the Techniques  52
|
4 Test Scenarios Generation Using Combined Object-Oriented Models  55
    4.2.3 State Machine Diagram  58
    4.2.4 Test Coverage Criteria  59
    4.2.5 Bio Inspired Meta-heuristic Algorithm for Object-Oriented Testing  60
    4.4.1 Construction of the Sequence and State-Machine Diagram of System  62
    4.4.2 Generate Individual Graphs of the Sequence and State-Machine Diagram  62
    4.4.3 Generate Combined Intermediate Graph Named as State-Sequence Intermediate Graph  63
    4.4.4 Generate the Test Scenarios SSIG  63
  4.5 Implementation and Result Analysis  64
    4.5.1 Generate the Test Scenarios SSIG  66
    4.5.2 Construction of Combined State-Sequence Intermediate Graph (SSIG)  67
    4.5.3 Generation of Test Scenarios  67
    4.5.4 Observed Test Cases  68
|
5 A Novel Approach of Software Fault Prediction Using Deep Learning Technique  73
    5.3.2 Basic Deep Learning Terminologies  77
    5.3.3 Deep Learning Models  80
  5.4 Fault Localization Using CNN  82
    5.4.2 Experimental Setup and Example Program  84
    5.4.3 Procedure of Fault Localization Using CNN  87
  5.5 Conclusion and Future Work  89
|
6 Feature-Based Semi-supervised Learning to Detect Malware from Android  93
  6.4 Feature Sub-set Selection Approaches  102
    6.4.1 Consistency Sub-set Evaluation Approach  102
    6.4.2 Filtered Sub-set Evaluation  103
    6.4.3 Rough Set Analysis Approach  103
    6.4.4 Feature Sub-set Selection Approach Based on Correlation  104
  6.5 Machine Learning Classifiers  104
  6.6 Proposed Detection Framework  106
  6.7 Evaluation of Parameters  106
  6.9 Outcomes of the Experiment  108
    6.9.1 Feature Sub-set Selection Approaches  108
    6.9.2 Machine Learning Classifier  109
    6.9.3 Comparison of Outcomes  112
    6.9.4 Evaluation of Proposed Framework Using Proposed Detection Framework  113
    6.9.5 Experimental Finding  114