2024
[1] "Activity-Based Detection of (Anti-)Patterns: An Embedded Case Study of the Fire Drill," e-Informatica Software Engineering Journal, vol. 18, no. 1, p. 240106, 2024. DOI: 10.37190/e-Inf240106.
Authors
Sebastian Hönel, Petr Picha, Morgan Ericsson, Premek Brada, Welf Löwe, Anna Wingkvist
Abstract
Background: Identifying and assessing software process anti-patterns currently requires expensive, error-prone, expert-based evaluations. Without an exact ground truth, process artifacts cannot be used to quantitatively analyze these phenomena or to train prediction models.
Aim: Develop a replicable methodology for organizational learning from process (anti-)patterns, demonstrating the mining of reliable ground truth and exploitation of process artifacts.
Method: We conduct an embedded case study to find manifestations of the Fire Drill anti-pattern in n=15 projects. To ensure quality, the ground truth is established through the agreement of three human experts. Their evaluations and the process artifacts are then used to establish a quantitative understanding and to train a prediction model.
Results: The qualitative review reveals many issues across the projects. (i) The expert assessments consistently provide a credible ground truth. (ii) The phenomenological descriptions of the Fire Drill match the observed project activities over time (for example, development activity). (iii) Regression models trained on only approx. 12–25 examples are sufficiently stable.
Conclusion: The approach is independent of the data source (source code or issue-tracking data). It allows process artifacts to be leveraged for establishing additional knowledge about the phenomenon and for training robust predictive models. The results indicate that the methodology is apt for identifying instances of the Fire Drill and of similar anti-patterns modeled using activities. Such identification could be used in post-mortem process analyses that support organizational learning for improving processes.
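The stability claim in Results (iii) can be illustrated with leave-one-out cross-validation (LOOCV), the standard way to estimate generalization error when only a dozen or two examples are available. The following is a minimal, hypothetical sketch — the data, the single activity-derived feature, and the plain least-squares model are illustrative assumptions, not the authors' actual features or models:

```python
# Hedged sketch: LOOCV on a small dataset (n=15, mirroring the study's
# project count) to check whether a simple regression model is stable.
import random

random.seed(42)

# Synthetic "projects": one activity-derived feature x, one expert score y.
n = 15
xs = [random.uniform(0, 1) for _ in range(n)]
ys = [2.0 * x + 0.5 + random.gauss(0, 0.1) for x in xs]

def fit_ols(x, y):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# LOOCV: hold out each example once, fit on the rest, record the error.
errors = []
for i in range(n):
    x_tr = xs[:i] + xs[i + 1:]
    y_tr = ys[:i] + ys[i + 1:]
    slope, intercept = fit_ols(x_tr, y_tr)
    errors.append(abs((slope * xs[i] + intercept) - ys[i]))

mae = sum(errors) / n
print(f"LOOCV mean absolute error: {mae:.3f}")
```

A low and low-variance LOOCV error across the n held-out folds is one practical indicator that a model trained on so few examples is not merely memorizing its training set.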
Keywords
anti-patterns, Fire Drill, case study
References
1. C.J. Neill, P.A. Laplante, and J.F. DeFranco, Antipatterns: Managing Software Organizations and People, 2nd ed. Auerbach Publications, 2011.
2. C. Alexander, S. Ishikawa, M. Silverstein, M. Jacobson, I. Fiksdahl-King et al., A Pattern Language – Towns, Buildings, Construction. Oxford University Press, 1977.
3. W.H. Brown, R.C. Malveau, H.W. McCormick III, and T.J. Mowbray, AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis. John Wiley & Sons, Inc., 1998.
4. P.A. Laplante and C.J. Neill, Antipatterns: Identification, Refactoring, and Management, 1st ed., Auerbach Series on Applied Software Engineering. CRC Press, Auerbach Publications, 2005.
5. L. Simeckova, P. Brada, and P. Picha, “SPEM-based process anti-pattern models for detection in project data,” in 46th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2020, Portoroz, Slovenia, August 26–28, 2020. IEEE, 2020, pp. 89–92.
6. I. Stamelos, “Software project management anti-patterns,” Journal of Systems and Software, Vol. 83, No. 1, 2010, pp. 52–59.
7. R.R. Nelson, "IT project management: Infamous failures, classic mistakes, and best practices," MIS Quarterly Executive, Vol. 6, No. 2, 2008. [Online]. https://aisel.aisnet.org/misqe/vol6/iss2/4
8. R.S. Kenett and E.R. Baker, Software Process Quality: Management and Control, 1st ed., Computer Aided Engineering New York, N.Y., 6. Marcel Dekker Inc., 1999.
9. C.P. Halvorsen and R. Conradi, “A taxonomy to compare SPI frameworks,” in Software Process Technology, 8th European Workshop, EWSPT 2001, Witten, Germany, June 19–21, 2001, Proceedings, Lecture Notes in Computer Science, V. Ambriola, Ed., Vol. 2077. Springer, 2001, pp. 217–235.
10. A. Birk, T. Dingsoyr, and T. Stalhane, “Postmortem: never leave a project without it,” IEEE Software, Vol. 19, No. 3, 2002, pp. 43–45.
11. W.J. Brown, H.W. McCormick III, and S.W. Thomas, AntiPatterns in Project Management. John Wiley & Sons, Inc., 2000.
12. P. Silva, A.M. Moreno, and L. Peters, “Software project management: Learning from our mistakes,” IEEE Software, Vol. 32, No. 3, 2015, pp. 40–43.
13. A. Nizam, "Software project failure process definition," IEEE Access, Vol. 10, 2022, pp. 34428–34441.
14. P. Brada and P. Picha, “Software process anti-patterns catalogue,” in Proceedings of the 24th European Conference on Pattern Languages of Programs, EuroPLoP 2019, Irsee, Germany, July 3–7, 2019, EuroPLoP’19, T.B. Sousa, Ed. ACM, 2019, pp. 28:1–28:10.
15. P.G.F. Matsubara, B.F. Gadelha, I. Steinmacher, and T.U. Conte, “Sextamt: A systematic map to navigate the wide seas of factors affecting expert judgment software estimates,” Journal of Systems and Software, 2021, p. 111148. [Online]. https://www.sciencedirect.com/science/article/pii/S0164121221002429
16. F.U. Muram, B. Gallina, and L.G. Rodriguez, “Preventing omission of key evidence fallacy in process-based argumentations,” in 11th International Conference on the Quality of Information and Communications Technology, QUATIC 2018, Coimbra, Portugal, September 4–7, 2018, A. Bertolino, V. Amaral, P. Rupino, and M. Vieira, Eds. IEEE Computer Society, 2018, pp. 65–73.
17. P. Picha, P. Brada, R. Ramsauer, and W. Mauerer, “Towards architect’s activity detection through a common model for project pattern analysis,” in 2017 IEEE International Conference on Software Architecture Workshops, ICSA Workshops 2017, Gothenburg, Sweden, April 5–7, 2017. IEEE Computer Society, 2017, pp. 175–178.
18. P. Picha and P. Brada, “Software process anti-pattern detection in project data,” in Proceedings of the 24th European Conference on Pattern Languages of Programs, EuroPLoP 2019, Irsee, Germany, July 3–7, 2019, EuroPLoP’19, T.B. Sousa, Ed. ACM, 2019, pp. 20:1–20:12.
19. D. Settas, S. Bibi, P. Sfetsos, I. Stamelos, and V.C. Gerogiannis, “Using bayesian belief networks to model software project management antipatterns,” in Fourth International Conference on Software Engineering, Research, Management and Applications (SERA 2006), 9–11 August 2006, Seattle, Washington, USA. IEEE Computer Society, 2006, pp. 117–124.
20. D. Settas and I. Stamelos, “Using ontologies to represent software project management antipatterns,” in Proceedings of the Nineteenth International Conference on Software Engineering & Knowledge Engineering (SEKE’2007), Boston, Massachusetts, USA, July 9–11, 2007. Knowledge Systems Institute Graduate School, 2007, pp. 604–609.
21. M.B. Perkusich, G. Soares, H.O. Almeida, and A. Perkusich, “A procedure to detect problems of processes in software development projects using bayesian networks,” Expert Systems with Applications, Vol. 42, No. 1, 2015, pp. 437–450.
22. N.E. Fenton, W. Marsh, M. Neil, P. Cates, S. Forey et al., “Making resource decisions for software projects,” in 26th International Conference on Software Engineering (ICSE 2004), 23–28 May 2004, Edinburgh, United Kingdom, A. Finkelstein, J. Estublier, and D.S. Rosenblum, Eds. IEEE Computer Society, 2004, pp. 397–406.
23. M. Unterkalmsteiner, T. Gorschek, A.M. Islam, C.K. Cheng, R.B. Permadi et al., “Evaluation and measurement of software process improvement—a systematic literature review,” IEEE Transactions on Software Engineering, Vol. 38, No. 2, 2012, pp. 398–424.
24. J.J.P. Schalken, S. Brinkkemper, and H. van Vliet, “Using linear regression models to analyse the effect of software process improvement,” in Product-Focused Software Process Improvement, 7th International Conference, PROFES 2006, Amsterdam, The Netherlands, June 12–14, 2006, Proceedings, Lecture Notes in Computer Science, J. Münch and M. Vierimaa, Eds., Vol. 4034. Springer, 2006, pp. 234–248.
25. R.K. Yin, Case Study Research: Design and Methods, 5th ed., Applied Social Research Methods. SAGE Publications, 2013.
26. R.W. Scholz and O. Tietje, Embedded Case Study Methods: Integrating Quantitative and Qualitative Knowledge, 1st ed. SAGE Publications, Inc, 2001.
27. T. Shaikhina, D. Lowe, S. Daga, D. Briggs, R. Higgins et al., “Machine learning for predictive modelling based on small data in biomedical engineering,” IFAC-PapersOnLine, Vol. 48, No. 20, 2015, pp. 469–474, 9th IFAC Symposium on Biological and Medical Systems BMS 2015. [Online]. https://www.sciencedirect.com/science/article/pii/S2405896315020765
28. Y. Zhang and C. Ling, “A strategy to apply machine learning to small datasets in materials science,” npj Computational Materials, Vol. 4, No. 1, 5 2018, p. 25.
29. P. Runeson, M. Höst, A. Rainer, and B. Regnell, Case Study Research in Software Engineering – Guidelines and Examples. Wiley, 2012. [Online]. http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118104358.html
30. K.L. Gwet, Handbook of Inter-Rater Reliability: The Definitive Guide to Measuring the Extent of Agreement Among Raters, 4th ed. Advanced Analytics, Sep. 2014.
31. E. Tüzün, H. Erdogmus, M.T. Baldassarre, M. Felderer, R. Feldt et al., “Ground-truth deficiencies in software engineering: When codifying the past can be counterproductive,” IEEE Software, Vol. 39, No. 3, 2022, pp. 85–95.
32. O. Bousquet and A. Elisseeff, “Stability and generalization,” Journal of Machine Learning Research, Vol. 2, 2002, pp. 499–526. [Online]. http://jmlr.org/papers/v2/bousquet02a.html
33. K. Scott, The unified process explained, 1st ed. Boston, MA: Addison Wesley Professional, 11 2001.
34. P. Kroll and P. Kruchten, The Rational Unified Process Made Easy: A Practitioner’s Guide to the RUP, Addison-Wesley object technology series. Boston, MA: Addison-Wesley Educational, 4 2003.
35. W.R. Shadish, T.D. Cook, and D.T. Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference, 3rd ed. Houghton Mifflin Company, 2002.
36. J.M. Verner, J. Sampson, V. Tosic, N.A.A. Bakar, and B.A. Kitchenham, “Guidelines for industrially-based multiple case studies in software engineering,” in Proceedings of the Third IEEE International Conference on Research Challenges in Information Science, RCIS 2009, Fès, Morocco, 22–24 April 2009, A. Flory and M. Collard, Eds. IEEE, 2009, pp. 313–324.
37. R. Zhu, D. Zeng, and M.R. Kosorok, "Reinforcement learning trees," Journal of the American Statistical Association, Vol. 110, No. 512, 2015, pp. 1770–1784. PMID: 26903687.
38. D. Draheim and L. Pekacki, “Process-centric analytical processing of version control data,” in 6th International Workshop on Principles of Software Evolution (IWPSE 2003), 1–2 September 2003, Helsinki, Finland. IEEE Computer Society, 2003, p. 131.
39. R. Ramsauer, D. Lohmann, and W. Mauerer, “Observing custom software modifications: A quantitative approach of tracking the evolution of patch stacks,” in Proceedings of the 12th International Symposium on Open Collaboration, OpenSym 2016, Berlin, Germany, August 17–19, 2016, A.I. Wasserman, Ed. ACM, 2016, pp. 4:1–4:4.
40. D.A. Tamburri, F. Palomba, A. Serebrenik, and A. Zaidman, “Discovering community patterns in open-source: a systematic approach and its evaluation,” Empirical Software Engineering, Vol. 24, No. 3, 2019, pp. 1369–1417.
41. S.u. Talpová and T. Čtvrtníková, “Scrum anti-patterns, team performance and responsibility,” International Journal of Agile Systems and Management, Vol. 14, No. 1, 2021, p. 170.
42. A. Hachemi, “Software development process modeling with patterns,” in WSSE 2020: The 2nd World Symposium on Software Engineering, Chengdu, China, September 25–27, 2020. ACM, 2020, pp. 37–41.
43. T. Frtala and V. Vranic, “Animating organizational patterns,” in 8th IEEE/ACM International Workshop on Cooperative and Human Aspects of Software Engineering, CHASE 2015, Florence, Italy, May 18, 2015, A. Begel, R. Prikladnicki, Y. Dittrich, C.R.B. de Souza, A. Sarma et al., Eds. IEEE Computer Society, 2015, pp. 8–14.
44. A.H.M. ter Hofstede, C. Ouyang, M.L. Rosa, L. Song, J. Wang et al., “APQL: A process-model query language,” in Asia Pacific Business Process Management – First Asia Pacific Conference, AP-BPM 2013, Beijing, China, August 29–30, 2013. Selected Papers, Lecture Notes in Business Information Processing, M. Song, M.T. Wynn, and J. Liu, Eds., Vol. 159. Springer, 2013, pp. 23–38.
45. J. Roa, E. Reynares, M.L. Caliusco, and P.D. Villarreal, “Towards ontology-based anti-patterns for the verification of business process behavior,” in New Advances in Information Systems and Technologies – Volume 2 [WorldCIST’16, Recife, Pernambuco, Brazil, March 22–24, 2016], Advances in Intelligent Systems and Computing, Á. Rocha, A.M.R. Correia, H. Adeli, L.P. Reis, and M.M. Teixeira, Eds. Springer, 2016, Vol. 445, pp. 665–673.
46. A. Awad, A. Barnawi, A. Elgammal, R.E. Shawi, A. Almalaise et al., “Runtime detection of business process compliance violations: an approach based on anti patterns,” in Proceedings of the 30th Annual ACM Symposium on Applied Computing, Salamanca, Spain, April 13–17, 2015, R.L. Wainwright, J.M. Corchado, A. Bechini, and J. Hong, Eds. ACM, 2015, pp. 1203–1210.
47. T.O.A. Lehtinen, M. Mäntylä, J. Vanhanen, J. Itkonen, and C. Lassenius, “Perceived causes of software project failures – an analysis of their relationships,” Information and Software Technology, Vol. 56, No. 6, 2014, pp. 623–643.
48. L. Rising and N.S. Janoff, “The scrum software development process for small teams,” IEEE Software, Vol. 17, No. 4, 2000, pp. 26–32.
49. P.G. Smith and D.G. Reinertsen, Developing Products in Half the Time: New Rules, New Tools, 2nd ed. Nashville, TN: John Wiley & Sons, 10 1997.
50. F.P. Brooks, Jr, The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition, 2nd ed. Boston, MA: Addison-Wesley Longman, 8 1995.
51. P. Picha, S. Hönel, P. Brada, M. Ericsson, W. Löwe et al., "Process anti-pattern detection in student projects – a case study," to appear in Proceedings of the 27th European Conference on Pattern Languages of Programs, EuroPLoP 2022, Irsee, Germany, July 6–10, 2022, EuroPLoP'22, T.B. Sousa, Ed. ACM, 2022.
52. E.B. Swanson, “The dimensions of maintenance,” in Proceedings of the 2nd International Conference on Software Engineering, San Francisco, California, USA, October 13–15, 1976, R.T. Yeh and C.V. Ramamoorthy, Eds. IEEE Computer Society, 1976, pp. 492–497. [Online]. http://dl.acm.org/citation.cfm?id=807723
53. S. Hönel, M. Ericsson, W. Löwe, and A. Wingkvist, “Using source code density to improve the accuracy of automatic commit classification into maintenance activities,” Journal of Systems and Software, Vol. 168, 2020, p. 110673.
54. D.I.K. Sjøberg, T. Dybå, B.C.D. Anda, and J.E. Hannay, “Building theories in software engineering,” in Guide to Advanced Empirical Software Engineering, F. Shull, J. Singer, and D.I.K. Sjøberg, Eds. Springer, 2008, pp. 312–336.
55. C. Wohlin and A. Rainer, “Is it a case study? – A critical analysis and guidance,” Journal of Systems and Software, Vol. 192, 2022, p. 111395.
56. C. Wohlin, P. Runeson, M. Höst, M.C. Ohlsson, B. Regnell et al., Experimentation in Software Engineering, 1st ed. Springer, 2012.
57. S. Hönel and C. Wohlin, Personal communication, Dec. 2022. Prof. Wohlin recently authored guidelines for correctly classifying studies [55].
58. M.J. Tiedeman, “Post-mortems – methodology and experiences,” IEEE Journal on Selected Areas in Communications, Vol. 8, No. 2, 1990, pp. 176–180.
59. B. Collier, T. DeMarco, and P. Fearey, “A defined process for project post mortem review,” IEEE Software, Vol. 13, No. 4, 1996, pp. 65–72.
60. J. Gerring, Case Study Research: Principles and Practices, 2nd ed., Strategies for Social Inquiry. Cambridge University Press, 2017.
61. K. Petersen and C. Wohlin, “Context in industrial software engineering research,” in Proceedings of the Third International Symposium on Empirical Software Engineering and Measurement, ESEM 2009, October 15–16, 2009, Lake Buena Vista, Florida, USA. IEEE Computer Society, 2009, pp. 401–404.
62. S. Hönel, P. Pícha, P. Brada, L. Rychtarova, and J. Danek, “Detection of the Fire Drill anti-pattern: 15 real-world projects with ground truth, issue-tracking data, source code density, models and code,” 1 2023, The repository for the source code based method is at: https://github.com/MrShoenel/anti-pattern-models.
63. D. Chappell, “What is application lifecycle management?” 12 2008. [Online]. https://web.archive.org/web/20141207012857/http://www.microsoft.com/global/applicationplatform/en/us/RenderingAssets/Whitepapers/WhatisApplicationLifecycleManagement.pdf
64. P. Runeson and M. Höst, “Guidelines for conducting and reporting case study research in software engineering,” Empirical Software Engineering, Vol. 14, No. 2, 2009, pp. 131–164.
65. J. Cohen, "Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit," Psychological Bulletin, Vol. 70, No. 4, 1968, p. 213.
66. K.L. Gwet, “Computing inter-rater reliability and its variance in the presence of high agreement,” British Journal of Mathematical and Statistical Psychology, Vol. 61, No. 1, 2008, pp. 29–48.
67. J.R. Landis and G.G. Koch, "An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers," Biometrics, Vol. 33, No. 2, 6 1977, pp. 363–374. PMID: 884196.
68. D. Klein, “Implementing a general framework for assessing interrater agreement in stata,” The Stata Journal, Vol. 18, No. 4, 2018, pp. 871–901.
69. B.W. Boehm, Software Engineering Economics, 1st ed. Philadelphia, PA: Prentice Hall, Oct. 1981.
70. N.C. Dalkey, “The Delphi Method: An Experimental Study of Group Opinion,” The RAND Corporation, Santa Monica, CA, Tech. Rep., 1969, document Number: RM-5888-PR. [Online]. https://www.rand.org/pubs/research_memoranda/RM5888.html
71. M. Rosenblatt, “Remarks on Some Nonparametric Estimates of a Density Function,” The Annals of Mathematical Statistics, Vol. 27, No. 3, 1956, pp. 832–837, zbMATH:0073.14602, MathSciNet:MR79873.
72. D.M. Endres and J.E. Schindelin, “A new metric for probability distributions,” IEEE Transactions on Information Theory, Vol. 49, No. 7, 2003, pp. 1858–1860. [Online]. https://doi.org/10.1109/TIT.2003.813506
73. B. Hofner, L. Boccuto, and M. Göker, “Controlling false discoveries in high-dimensional situations: boosting with stability selection,” BMC Bioinformatics, Vol. 16, No. 1, 5 2015, p. 144.
74. W.N. Venables and B.D. Ripley, Modern Applied Statistics with S, 4th ed. New York: Springer, 2002. [Online]. http://www.stats.ox.ac.uk/pub/MASS4
75. F. Bertrand and M. Maumy-Bertrand, plsRglm: Partial least squares linear and generalized linear regression for processing incomplete datasets by cross-validation and bootstrap techniques with R, 2018.
76. A. Peters and T. Hothorn, ipred: Improved Predictors, 2019, R package version 0.9-9. [Online]. https://CRAN.R-project.org/package=ipred
77. F. Pukelsheim, “The three sigma rule,” The American Statistician, Vol. 48, No. 2, 1994, pp. 88–91. [Online]. http://www.jstor.org/stable/2684253
78. S. Hönel, “Technical Reports Compilation: Detecting the Fire Drill Anti-pattern Using Source Code and Issue-Tracking Data,” CoRR, Vol. abs/2104.15090, 1 2023.
79. D. Vysochanskij and Y.I. Petunin, "Justification of the 3σ rule for unimodal distributions," Theory of Probability and Mathematical Statistics, Vol. 21, 1980, pp. 25–36.
80. P. Tchébychef, “Des Valeurs Moyennes,” Journal de Mathématiques Pures et Appliquées, Vol. 12, 1867, pp. 177–184, Traduction du Russe par M. N. de Khanikof. [Online]. http://eudml.org/doc/234989
81. G.C. Cawley and N.L.C. Talbot, “On over-fitting in model selection and subsequent selection bias in performance evaluation,” Journal of Machine Learning Research, Vol. 11, 2010, pp. 2079–2107.
82. S. Raudys and A.K. Jain, “Small sample size effects in statistical pattern recognition: Recommendations for practitioners,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 3, 1991, pp. 252–264.
83. S. Varma and R. Simon, “Bias in error estimation when using cross-validation for model selection,” BMC Bioinformatics, Vol. 7, No. 1, 2 2006.
84. A. Vabalas, E. Gowen, E. Poliakoff, and A.J. Casson, “Machine learning algorithm validation with a limited sample size,” PLOS ONE, Vol. 14, No. 11, 11 2019, pp. 1–20.
85. T. Shaikhina, D. Lowe, S. Daga, D. Briggs, R. Higgins et al., “Machine learning for predictive modelling based on small data in biomedical engineering,” IFAC-PapersOnLine, Vol. 48, No. 20, 2015, pp. 469–474.
86. L. Torgo, R.P. Ribeiro, B. Pfahringer, and P. Branco, “SMOTE for regression,” in Progress in Artificial Intelligence – 16th Portuguese Conference on Artificial Intelligence, EPIA 2013, Angra do Heroísmo, Azores, Portugal, September 9–12, 2013. Proceedings, Lecture Notes in Computer Science, L. Correia, L.P. Reis, and J. Cascalho, Eds., Vol. 8154. Springer, 2013, pp. 378–389.
87. R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, 2020. [Online]. https://www.R-project.org/
88. B. Greenwell, B. Boehmke, J. Cunningham, and G. Developers, gbm: Generalized Boosted Regression Models, 2020, R package version 2.1.8. [Online]. https://CRAN.R-project.org/package=gbm
89. A. Liaw and M. Wiener, "Classification and regression by randomForest," R News, Vol. 2, No. 3, 2002, pp. 18–22. [Online]. https://CRAN.R-project.org/doc/Rnews/
90. A. Karatzoglou, A. Smola, K. Hornik, and A. Zeileis, "kernlab – An S4 package for kernel methods in R," Journal of Statistical Software, Vol. 11, No. 9, 2004, pp. 1–20. [Online]. https://www.jstatsoft.org/index.php/jss/article/view/v011i09
91. P.A. Lachenbruch and M.R. Mickey, “Estimation of error rates in discriminant analysis,” Technometrics, Vol. 10, No. 1, 1968, pp. 1–11. [Online]. http://www.jstor.org/stable/1266219
92. M.J. Kearns and D. Ron, “Algorithmic stability and sanity-check bounds for leave-one-out cross-validation,” Neural Computation, Vol. 11, No. 6, 1999, pp. 1427–1453.
93. J.L. Fleiss, J. Cohen, and B.S. Everitt, “Large sample standard errors of kappa and weighted kappa.” Psychological Bulletin, Vol. 72, No. 5, 11 1969, pp. 323–327.
94. D.V. Cicchetti and S.A. Sparrow, "Developing criteria for establishing interrater reliability of specific items: Applications to assessment of adaptive behavior," American Journal of Mental Deficiency, Vol. 86, No. 2, 9 1981, pp. 127–137. PMID: 7315877.
95. J.L. Fleiss, Statistical Methods for Rates and Proportions, 2nd ed., Probability & Mathematical Statistics S. Nashville, TN: John Wiley & Sons, 5 1981.
96. D.A. Regier, W.E. Narrow, D.E. Clarke, H.C. Kraemer, S.J. Kuramoto et al., “DSM-5 field trials in the united states and canada, part II: test-retest reliability of selected categorical diagnoses,” American Journal of Psychiatry, Vol. 170, No. 1, Jan. 2013, pp. 59–70.
97. A.S. Lee, “A scientific methodology for mis case studies,” MIS Quarterly, Vol. 13, No. 1, 1989, pp. 33–50.