Automatic Classification of Program Paths Feasibility Using Active Learning
DOI: https://doi.org/10.14738/tmlai.66.5811

Keywords: Software testing, Infeasible paths detection, Batch learning, Active learning, Paths feasibility classifier, Markov model

Abstract
One of the challenging problems facing automated test data generation for path testing is the existence of infeasible paths, i.e., paths that no input data can exercise. Substantial time and effort may be wasted trying to generate input data for such paths. This paper proposes an active-learning approach to the automatic feasibility classification of program paths. The approach is based on the hypothesis that certain features of program behavior are stochastic processes that exhibit the Markov property, and that the resulting Markov models of individual program paths can be automatically clustered into effective predictors of path feasibility. To this end, the paper presents a technique that represents program paths as Markov models, and a clustering algorithm for Markov models that aggregates them into an effective path feasibility classifier. In this approach, the classifier is a map from program path statistics, namely edge, branch, or definition-use profiles, to a label for the path: “feasible” or “infeasible”. The technique employs a bootstrapping active-learning strategy, in which the classifier is trained incrementally on a series of labeled instances, extending its scope of training so that it can classify new paths. The paper also presents the results of experiments conducted to evaluate the effectiveness of three path feasibility classifiers built using the proposed technique and the bootstrapping strategy.
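To make the approach concrete, the following Python sketch (with hypothetical names; it is not the authors' implementation) illustrates the main ingredients described in the abstract: a program path, given here as a sequence of control-flow-graph node ids, is turned into a first-order Markov transition matrix; labeled models drive a simple feasibility predictor; and a bootstrapping loop extends the training set with newly labeled paths. The paper's clustering of Markov models is simplified here to a nearest-model rule, and the raw node sequence stands in for the edge, branch, or definition-use profiles mentioned above.

    # Illustrative sketch only, not the paper's implementation.
    # Assumes each path is a list of CFG node ids; all names are hypothetical.
    import numpy as np

    def markov_model(path, n_nodes):
        """Estimate a first-order Markov transition matrix from one program path."""
        counts = np.zeros((n_nodes, n_nodes))
        for src, dst in zip(path, path[1:]):
            counts[src, dst] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        # Rows for nodes the path never leaves stay all-zero.
        return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    def model_distance(a, b):
        """L1 distance between two transition matrices."""
        return np.abs(a - b).sum()

    class FeasibilityClassifier:
        """Nearest-model predictor over labeled Markov models of paths
        (a simplification of the paper's clustering-based classifier)."""

        def __init__(self, n_nodes):
            self.n_nodes = n_nodes
            self.models = []  # list of (transition_matrix, label) pairs

        def train(self, labeled_paths):
            for path, label in labeled_paths:  # label: "feasible" or "infeasible"
                self.models.append((markov_model(path, self.n_nodes), label))

        def classify(self, path):
            m = markov_model(path, self.n_nodes)
            distances = [model_distance(m, tm) for tm, _ in self.models]
            best = int(np.argmin(distances))
            return self.models[best][1], distances[best]

    def bootstrap(classifier, unlabeled_paths, oracle, threshold=1.0):
        """Bootstrapping loop: a path whose nearest model is farther than
        `threshold` is sent to the oracle, labeled, and added to the training
        set, so the classifier's scope of training grows incrementally."""
        for path in unlabeled_paths:
            label, dist = classifier.classify(path)
            if dist > threshold:
                label = oracle(path)  # obtain the true feasibility label
                classifier.models.append((markov_model(path, classifier.n_nodes), label))

In the paper's setting, the oracle role could be played by whatever procedure labels the training paths, for example a symbolic-execution check of path condition satisfiability.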
References
(1) Hedley, D., M. A. Hennell, The cause and effects of infeasible paths in computer programs, In Proceedings of the 8th International Conference on Software Engineering, London, England, 1985, pp. 259-266.
(2) Ilgun, K., R. A. Kemmerer, and P. A. Porras, State transition analysis: A rule-based intrusion detection approach, IEEE Transactions on Software Engineering, 1995, 21(3): p. 181–199.
(3) Dickinson, W., D. Leon, and A. Podgurski, Finding failures by cluster analysis of execution profiles, In Proceedings of the 23rd International Conference on Software Engineering (ICSE’01), May 2001 p. 339–348.
(4) Jha, S., K. Tan, and R. A. Maxion, Markov chains, classifiers, and intrusion detection, In Proceedings of the 14th IEEE Computer Security Foundations Workshop (CSFW’01), June 2001, p. 206–219.
(5) Ammons, G., R. Bodik, and J. R. Larus, Mining specifications, In Proceedings of the 2002 ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’02), January 2002, p. 4–16.
(6) Podgurski, A., D. Leon, P. Francis, W. Masri, M. Minch, J. Sun, and B. Wang, Automated support for classifying software failure reports, In Proceedings of the 25th International Conference on Software Engineering (ICSE’03), May 2003, p. 465–474.
(7) Gross, K. C., S. McMaster, A. Porter, A. Urmanov, and L. Votta, Proactive system maintenance using software telemetry, In Proceedings of the 1st International Conference on Remote Analysis and Measurement of Software Systems (RAMSS’03), May 2003, p. 24–26.
(8) Brun, Y. and M. D. Ernst, Finding latent code errors via machine learning over program executions, In Proceedings of the 26th International Conference on Software Engineering, 28 May 2004, Edinburgh, UK.
(9) Cohn, D. A., L. Atlas, and R. E. Ladner, Improving generalization with active learning, Machine Learning, 1994, 15(2): p. 201–221.
(10) Bowring, J. F., J. M. Rehg, and M. J. Harrold, Active learning for automatic classification of software behavior, ACM SIGSOFT Software Engineering Notes, July 2004.
(11) Whittaker, J. A. and J. H. Poore, Markov analysis of software specifications, ACM Transactions on Software Engineering and Methodology, January 1993, 2(1): p. 93–106.
(12) Cook, J. E. and A. L. Wolf, Automating process discovery through event-data analysis, In Proceedings of the 17th International Conference on Software Engineering (ICSE’95), 1995, p. 73–82.
(13) Haran, M., A. Karr, A. Orso, A. Porter, and A. Sanil, Applying classification techniques to remotely collected program execution data, In Proceedings of the 10th European Software Engineering Conference held jointly with the 13th ACM SIGSOFT International Symposium on Foundations of Software Engineering (ESEC/FSE-13), September 5–9, 2005, Lisbon, Portugal, p. 146-155.
(14) Haran, M., A. Karr, M. Last, A. Orso, A. A. Porter, A. Sanil, and S. Fouche, Techniques for Classifying Executions of Deployed Software to Support Software Engineering Tasks, IEEE Transactions on Software Engineering, May 2007, 33(5): p. 287-304.
(15) Lo, D., H. Cheng, J. Han, S. Khoo, and C. Sun, Classification of Software Behaviors for Failure Detection: A Discriminative Pattern Mining Approach, ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), Research Collection School of Information Systems, 2009.
(16) Francis, P., D. Leon, M. Minch, A. Podgurski, Tree-Based Methods for Classifying Software Failures, 15th International Symposium on Software Reliability Engineering (ISSRE 2004), 2-5 Nov. 2004, Saint-Malo, Bretagne, France.
(17) Harder, M., J. Mellen, and M. D. Ernst, Improving test suites via operational abstraction, In Proceedings of the 25th International Conference on Software Engineering (ICSE’03), May 2003, p. 60–71.
(18) Munson, J. C. and S. Elbaum, Software reliability as a function of user execution patterns, In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences, January 1999.
(19) Baskiotis, N., M. Sebag, M. Gaudel, S. Gouraud, A Machine Learning Approach for Statistical Software Testing, Twentieth International Joint Conference on Artificial Intelligence, Jan 2007, Hyderabad, India.
(20) Baskiotis, N. and M. Sebag, Structural Statistical Software Testing with Active Learning in a Graph, Proceedings of the 17th international conference on Inductive logic programming, ILP'07, Corvallis, OR, USA, June 19 - 21, 2007, p. 49-62.
(21) Girgis, M. R., Using Symbolic Execution and Data Flow Criteria to Aid Test Data Selection, Software Testing, Verification and Reliability, Vol. 3, p. 101-112, 1993.
(22) Girgis, M. R., An experimental evaluation of a symbolic execution system, Software Engineering Journal, Vol. 7, No. 4, p. 285-290, 1992.
(23) Rapps, S. and E. J. Weyuker, Selecting software test data using data flow information, IEEE Transactions on Software Engineering, 1985, 11(4): p. 367-375.
(24) Duda, R. O., P. E. Hart, and D. G. Stork, Pattern Classification, 2001, John Wiley and Sons, Inc., New York.