[1] | "Software Deterioration Control Based on Issue Reports", In e-Informatica Software Engineering Journal, vol. 15, no. 1, pp. 115–132, 2021.
DOI: 10.37190/e-Inf210106.
Authors
Omid Bushehrian, Mohsen Sayari, Pirooz Shamsinejad
Abstract
Introduction: Successive code changes during the maintenance phase may cause bad smells and anti-patterns to emerge in the code, gradually resulting in its deterioration and reduced maintainability. Continuous Quality Control (QC) is therefore essential in this phase to refactor such anti-patterns and bad smells.
Objectives: The objective of this research is to present a novel component called Code Deterioration Watch (CDW) that integrates with existing Issue Tracking Systems (ITS) to help the QC team swiftly locate the software modules most vulnerable to deterioration. Importantly, CDW does not depend on code-level metrics; it relies entirely on issue-level metrics measured from ITS repositories.
Methods: By mining software repositories, an issue-level metric was identified that reliably alerts the QC team to bad-smell emergence. To measure this metric, a Stream Clustering algorithm called ReportChainer is proposed to spot Relatively Long Chains (RLC) of incoming issue reports; such chains tell the QC team that a concentrated point of successive changes has emerged in the software.
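The abstract does not describe ReportChainer's internals, so the sketch below only illustrates the chaining idea: each incoming report joins the most textually similar existing chain or starts a new one, and a chain that grows past a length threshold raises an RLC alert. The Jaccard token-overlap similarity, the threshold values, and all identifiers here are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch only: ReportChainer's real similarity measure and
# thresholds are not given in the abstract; everything below is assumed.

def tokens(text):
    """Lowercased word set of a report's text."""
    return set(text.lower().split())

class ChainStream:
    def __init__(self, sim_threshold=0.15, rlc_length=5):
        self.sim_threshold = sim_threshold  # minimum similarity to join a chain
        self.rlc_length = rlc_length        # chain length that raises an RLC alert
        self.chains = []                    # each chain is a list of report texts

    def add_report(self, text):
        t = tokens(text)
        best_chain, best_sim = None, 0.0
        for chain in self.chains:
            chain_tokens = set().union(*(tokens(r) for r in chain))
            sim = len(t & chain_tokens) / len(t | chain_tokens)  # Jaccard
            if sim > best_sim:
                best_chain, best_sim = chain, sim
        if best_chain is None or best_sim < self.sim_threshold:
            best_chain = []                 # no chain is similar enough: start a new one
            self.chains.append(best_chain)
        best_chain.append(text)
        if len(best_chain) >= self.rlc_length:
            print(f"RLC alert: {len(best_chain)} related reports in one chain")

stream = ChainStream(sim_threshold=0.15, rlc_length=3)
stream.add_report("report filter save fails with NullPointerException")
stream.add_report("saving a report filter corrupts the filter list")
stream.add_report("report filter list empty after save")  # completes a chain of three, triggers the alert
```

A production version would presumably also age out stale chains and map each chain to the packages its reports touch, but that bookkeeping is omitted here.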
Results: One contribution of this paper is a large integrated code-and-issue repository covering twelve medium- and large-sized open-source software products from Apache and Eclipse. Mining this repository showed a strong direct correlation (0.73 on average) between the number of issues of type “New Feature” reported against a software package and the number of bad smells of the “design” and “error prone” types that emerged in that package. In addition, a strong direct correlation (0.97 on average) was observed between the length of a chain and the number of times it caused changes to a software package.
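For illustration, correlations of this kind are Pearson coefficients over per-package series and can be computed with NumPy (listed in the references below). The counts in this sketch are made-up placeholders, not the paper's data.

```python
# Minimal sketch of the package-level correlation analysis; the counts
# below are hypothetical placeholders, not the paper's measurements.
import numpy as np

# Hypothetical per-package counts for five packages.
new_feature_issues = np.array([12, 4, 30, 7, 18])
design_error_smells = np.array([15, 6, 34, 9, 21])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(new_feature_issues, design_error_smells)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to 1.0 for these toy numbers
```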
Conclusion: The direct correlation between the number of issues of type “New Feature” reported against a software package and (1) the number of bad smells of the “design” and “error prone” types and (2) the value of the package's “CyclomaticComplexity” metric justifies the idea of quality control based purely on issue-level metrics. A stream clustering algorithm can be applied effectively to signal the emergence of a deteriorating module.
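For reference, “CyclomaticComplexity” is McCabe's metric (reference 27 below): the number of linearly independent paths through a module, commonly computed as the number of decision points plus one. The sketch below is a deliberately rough regex-based approximation; real analyzers such as PMD work on the parsed syntax tree.

```python
# Rough, illustrative approximation of McCabe's cyclomatic complexity for a
# Java snippet: count decision points and add one. Only a sketch; real
# analyzers (e.g., PMD) use an AST-based rule set.
import re

DECISIONS = re.compile(r"\b(?:if|for|while|case|catch)\b|&&|\|\|")

def approx_cyclomatic_complexity(java_source: str) -> int:
    return len(DECISIONS.findall(java_source)) + 1

src = "if (x > 0 && y > 0) { for (int i = 0; i < x; i++) sum += i; }"
print(approx_cyclomatic_complexity(src))  # if + && + for + 1 = 4
```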
Keywords
code smells, issue reports, maintainability, document classification
References
1. Bugzilla. [Online]. http://www.bugzilla.org (Accessed on 2018-06-06).
2. Atlassian. [Online]. https://www.atlassian.com/software/jira (Accessed on 2018-06-06).
3. L. Yu, S. Ramaswamy, and A. Nair, “Using bug reports as a software quality measure,” 2013.
4. M. Badri, N. Drouin, and F. Touré, “On understanding software quality evolution from a defect perspective: A case study on an open source software system,” in International Conference on Computer Systems and Industrial Informatics. IEEE, 2012, pp. 1–6.
5. C. Chen, S. Lin, M. Shoga, Q. Wang, and B. Boehm, “How do defects hurt qualities? An empirical study on characterizing a software maintainability ontology in open source software,” in International Conference on Software Quality, Reliability and Security (QRS), 2018, pp. 226–237.
6. Standard for Software Maintenance, IEEE Std. 1219-1998, 1998.
7. L. Yu, S. Schach, and K. Chen, “Measuring the maintainability of open-source software,” in International Symposium on Empirical Software Engineering, 2005, p. 7.
8. R. Malhotra and A. Chug, “Software maintainability: Systematic literature review and current trends,” International Journal of Software Engineering and Knowledge Engineering, Vol. 26, No. 8, 2016, pp. 1221–1253.
9. H. Sharma and A. Chug, “Dynamic metrics are superior than static metrics in maintainability prediction: An empirical case study,” in 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions). IEEE, 2015, pp. 1–6.
10. S. Shafi, S.M. Hassan, A. Arshaq, M.J. Khan, and S. Shamail, “Software quality prediction techniques: A comparative analysis,” in 4th International Conference on Emerging Technologies. IEEE, 2008, pp. 242–246.
11. P. Piotrowski and L. Madeyski, “Software defect prediction using bad code smells: A systematic literature review,” Data-Centric Business and Applications, 2020, pp. 77–99.
12. M. Fowler, Refactoring: Improving the Design of Existing Code. Addison-Wesley Professional, 2018.
13. Apache. [Online]. https://projects.apache.org/projects.html (Accessed on 2018-06-06).
14. Eclipse. [Online]. https://www.eclipse.org/ (Accessed on 2018-06-06).
15. V. Lenarduzzi, A.C. Stan, D. Taibi, D. Tosi, and G. Venters, “A dynamical quality model to continuously monitor software maintenance,” in The European Conference on Information Systems Management. Academic Conferences International Limited, 2017, pp. 168–178.
16. S. Kim, T. Zimmermann, K. Pan, and E.J. Whitehead, Jr., “Automatic identification of bug-introducing changes,” in 21st IEEE/ACM International Conference on Automated Software Engineering (ASE ’06), 2006, pp. 81–90.
17. H. Wang, M. Kessentini, W. Grosky, and H. Meddeb, “On the use of time series and search based software engineering for refactoring recommendation,” in Proceedings of the 7th International Conference on Management of computational and collective intElligence in Digital EcoSystems, 2015, pp. 35–42.
18. F. Palomba, M. Zanoni, F.A. Fontana, A. De Lucia, and R. Oliveto, “Smells like teen spirit: Improving bug prediction performance using the intensity of code smells,” in International Conference on Software Maintenance and Evolution (ICSME). IEEE, 2016, pp. 244–255.
19. F. Khomh, M. Di Penta, Y.G. Guéhéneuc, and G. Antoniol, “An exploratory study of the impact of antipatterns on class change- and fault-proneness,” Empirical Software Engineering, Vol. 17, No. 3, 2012, pp. 243–275.
20. A.S. Cairo, G. de F. Carneiro, and M.P. Monteiro, “The impact of code smells on software bugs: A systematic literature review,” Information, Vol. 9, No. 11, 2018, p. 273.
21. D.M. Le, D. Link, A. Shahbazian, and N. Medvidovic, “An empirical study of architectural decay in open-source software,” in International Conference on Software Architecture (ICSA). IEEE, 2018, pp. 176–185.
22. PMD static code analyzer. [Online]. https://pmd.github.io/latest/pmd_rules_java.html (Accessed on 2018-06-06).
23. D. Kim, Y. Tao, S. Kim, and A. Zeller, “Where should we fix this bug? A two-phase recommendation model,” IEEE Transactions on Software Engineering, Vol. 39, No. 11, 2013, pp. 1597–1610.
24. N. Limsettho, H. Hata, A. Monden, and K. Matsumoto, “Unsupervised bug report categorization using clustering and labeling algorithm,” International Journal of Software Engineering and Knowledge Engineering, Vol. 26, No. 7, 2016, pp. 1027–1053.
25. F.A. Fontana and M. Zanoni, “Code smell severity classification using machine learning techniques,” Knowledge-Based Systems, Vol. 128, 2017, pp. 43–58.
26. S.A. Vidal, C. Marcos, and J.A. Díaz-Pace, “An approach to prioritize code smells for refactoring,” Automated Software Engineering, Vol. 23, No. 3, 2016, pp. 501–532.
27. T.J. McCabe, “A complexity measure,” IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, 1976, pp. 308–320.
28. C.D. Manning, P. Raghavan, and H. Schütze, Introduction to Information Retrieval. Cambridge University Press, 2008.
29. GitHub. [Online]. https://github.com/ (Accessed on 2018-06-06).
30. isomorphic-git. [Online]. https://isomorphic-git.org/ (Accessed on 2018-06-06).
31. Apache’s JIRA issue tracker. [Online]. https://issues.apache.org/jira/secure/Dashboard.jspa (Accessed on 2018-06-06).
32. bugs.eclipse.org. [Online]. https://bugs.eclipse.org/bugs/ (Accessed on 2018-06-06).
33. NumPy. [Online]. https://numpy.org/
34. R.C. Martin, Clean Code: A Handbook of Agile Software Craftsmanship. Pearson Education, 2009.
35. R.C. Martin and M. Martin, Agile Principles, Patterns, and Practices in C#. Prentice Hall, 2007.
36. E. Fernandes, J. Oliveira, G. Vale, T. Paiva, and E. Figueiredo, “A review-based comparative study of bad smell detection tools,” in Proceedings of the 20th International Conference on Evaluation and Assessment in Software Engineering, EASE ’16. ACM, 2016.