Automated Code Reviewer Recommendation for Pull Requests

2024
[1] Mina-Sadat Moosareza and Abbas Heydarnoori, "Automated Code Reviewer Recommendation for Pull Requests," e-Informatica Software Engineering Journal, vol. 18, no. 1, article no. 240108, 2024. DOI: 10.37190/e-Inf240108.

Authors

Mina-Sadat Moosareza, Abbas Heydarnoori

Abstract

Background: With the advent of distributed software development based on pull requests, code changes can be reviewed by a third party before they are integrated into the main codebase, in an informal, tool-based process called Modern Code Review (MCR). Performing MCR effectively can facilitate the software evolution phase by reducing post-release defects. MCR allows developers to invite appropriate reviewers to inspect their code once a pull request has been submitted. In many projects, selecting the right reviewer is time-consuming and challenging due to the high volume of requests and the large number of potential reviewers. Various recommender systems that use heuristics, machine learning, or social networks to automatically suggest reviewers have been proposed in the past. Many previous approaches focus on a narrow set of candidate-reviewer features, such as reviewing expertise, and some have been evaluated on small datasets that limit generalizability. Additionally, many of them do not reach the desired levels of accuracy, precision, or recall.

Aim: Our aim is to increase the accuracy of code reviewer recommendations by computing candidate scores relative to one another and by weighting the recency of their activities in an optimal way.
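
As a minimal sketch of relative scoring with recency weighting (the exponential decay, its half-life, and the per-activity weights below are illustrative assumptions, not the paper's exact formulation):

from datetime import datetime, timezone

# Illustrative sketch only: weight each past activity (review or commit) by an
# exponential recency decay, then express scores relative to the best candidate.
HALF_LIFE_DAYS = 60.0  # assumed half-life; not taken from the paper

def recency_weight(activity_date, now, half_life_days=HALF_LIFE_DAYS):
    """Weight in (0, 1]: recent activities count more than old ones."""
    age_days = (now - activity_date).total_seconds() / 86400.0
    return 0.5 ** (age_days / half_life_days)

def relative_scores(raw_scores):
    """Scale every candidate's score by the best score, so values are
    comparable across pull requests (relative rather than absolute scoring)."""
    best = max(raw_scores.values(), default=0.0)
    return {c: (s / best if best > 0 else 0.0) for c, s in raw_scores.items()}

def score_candidates(activities, now=None):
    """activities: {candidate: [datetime of each past review/commit]}."""
    now = now or datetime.now(timezone.utc)
    raw = {c: sum(recency_weight(d, now) for d in dates)
           for c, dates in activities.items()}
    return relative_scores(raw)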

Method: We present a heuristic approach that considers candidate reviewers' expertise in both reviewing and committing, as well as their social relations, to automatically recommend code reviewers. During the development of the approach, we examine how each reviewer feature contributes to a candidate's suitability for reviewing a new pull request.
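
A minimal sketch of how such features could be combined into a single candidate score, assuming a simple weighted sum (the feature names and weights are hypothetical, since the exact combination rule is defined in the paper itself):

# Hypothetical feature combination for ranking candidate reviewers.
# review_expertise: overlap of previously reviewed files with the new pull request
# commit_expertise: overlap of previously committed files with the new pull request
# social_relation:  strength of past collaboration with the pull request author
WEIGHTS = {"review_expertise": 0.5, "commit_expertise": 0.3, "social_relation": 0.2}

def combined_score(features):
    """features: {feature_name: value in [0, 1]} for one candidate."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def recommend(candidates, k=3):
    """candidates: {reviewer: feature dict}; returns the top-k reviewers."""
    ranked = sorted(candidates, key=lambda c: combined_score(candidates[c]), reverse=True)
    return ranked[:k]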

Results: We evaluated our algorithm on five open-source projects from GitHub. The results indicate that our proposed approach achieves a top-1 accuracy of 46%, a top-3 accuracy of 75%, and a mean reciprocal rank of 62%, outperforming previous related works.
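
For reference, top-k accuracy and mean reciprocal rank (MRR) over a set of pull requests can be computed as in the following sketch (standard metric definitions, not code from the paper):

def top_k_accuracy(recommendations, actual_reviewers, k):
    """Fraction of pull requests whose top-k recommendations contain at least
    one reviewer who actually reviewed the request."""
    hits = sum(1 for recs, actual in zip(recommendations, actual_reviewers)
               if any(r in actual for r in recs[:k]))
    return hits / len(recommendations)

def mean_reciprocal_rank(recommendations, actual_reviewers):
    """Average of 1/rank of the first actual reviewer in each recommendation
    list (a list containing no actual reviewer contributes 0)."""
    total = 0.0
    for recs, actual in zip(recommendations, actual_reviewers):
        total += next((1.0 / (i + 1) for i, r in enumerate(recs) if r in actual), 0.0)
    return total / len(recommendations)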

Conclusion: These results indicate that combining different reviewer features, including expertise level and prior collaboration history, leads to better code reviewer recommendations, as demonstrated by the improvements over previous related works.

Keywords

Automated code reviewer recommendation, Modern code review, Heuristic algorithms.
