Software Fault Reporting Processes in Business-Critical Systems
Jon Arvid Børretzen, Doctoral Thesis
Another issue with data repositories is the ease with which data can be extracted for analysis.
An example is from O1, where the researchers had to go to a great deal of effort to convert the fault data into a form that could be analyzed. In O3, the fault reports could only be accessed for analysis by printing hardcopies of the reports, which in turn had to be scanned and converted into analyzable data. To support process analysis efficiently, fault repositories should be kept in a standard, well-maintained form from which data can be extracted directly.
5. Discussion and conclusion

We have presented an overview of studies performed concerning fault reports, and shown the type of information that exists in, and is lacking from, such reports.
What we have learnt from the studies of the fault report repositories of these organizations is that the data are in some cases under-reported, and in most cases under-analyzed. By including some of the information that the organization already has, more focused analyses could be made possible. For instance, specific information about fault location and fault correction effort is generally not reported, even though this information is easy to register. One possibility is to introduce a standard for fault reporting, where the most important and useful fault information is mandatory.
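As an illustration of such a standard, a minimal mandatory-field fault report might be sketched as below. The field names are our own assumptions chosen to reflect the information identified as important in the studies (location, correction effort, phase introduced/detected); they do not represent any of the studied organizations' actual schemas.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical minimal fault report record. Fields without defaults are
# mandatory; the names are illustrative assumptions, not an existing standard.
@dataclass
class FaultReport:
    report_id: str
    date_reported: date
    severity: str                      # e.g. "critical", "major", "minor"
    fault_type: str                    # e.g. an ODC-style category
    location: str                      # component or module containing the fault
    phase_introduced: str              # e.g. "specification", "design", "coding"
    phase_detected: str                # e.g. "unit test", "system test"
    correction_effort_hours: float = 0.0
    description: str = ""

report = FaultReport(
    report_id="FR-001",
    date_reported=date(2006, 6, 12),
    severity="major",
    fault_type="function",
    location="billing-module",
    phase_introduced="design",
    phase_detected="system test",
    correction_effort_hours=4.5,
    description="Incorrect rounding in invoice totals",
)
print(report.location, report.correction_effort_hours)
```

Making such fields mandatory in the reporting tool would directly enable the location- and effort-based analyses that were found to be missing.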
A reasonable approach to improving fault reporting, and to using fault reports as a support for process improvement, is to start by being pragmatic. At first, use the readily available data that has already been collected; over time, adjust the amount and type of data collected during development and testing to tune this process.
We have learnt that the effort external researchers must spend to produce useful results from the available data is quite small compared to the collective effort developers spend recording this data. This shows that very little effort may give substantial effects for many software-developing organizations.
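To illustrate how little effort a first analysis of already-collected data can take, the sketch below tallies fault counts and correction effort per fault type from a fault log. It assumes the log can be exported as CSV; the column names ("fault_type", "effort_hours") and the sample data are hypothetical, for illustration only.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical CSV export of an existing fault log; column names are
# assumptions for illustration, not any organization's actual format.
FAULT_LOG = """fault_type,effort_hours
function,4.0
interface,1.5
function,6.0
documentation,0.5
interface,2.0
"""

def fault_profile(csv_text):
    """Count faults and sum correction effort per fault type."""
    counts, effort = Counter(), Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        counts[row["fault_type"]] += 1
        effort[row["fault_type"]] += float(row["effort_hours"])
    return counts, effort

counts, effort = fault_profile(FAULT_LOG)
for ftype, n in counts.most_common():
    print(f"{ftype}: {n} faults, {effort[ftype]:.1f} hours")
```

Even a profile this simple indicates which fault types dominate and which consume the most correction resources, which is the kind of feedback the studied organizations generally did not extract from their repositories.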
Finally, there are two main points we want to convey as a result of the studies we have done in these organizations:
• It is important to approach fault data analysis bottom-up, at least in the early phases of such research and analysis initiatives. The data are readily available; the work that remains is to design and carry out a study of these data.
• Much of the recorded fault data is of poor quality. This is most likely because there is little interest in actually using the data.
References

[Bas94] Basili, V.R., Caldiera, G., Rombach, H.D.: Goal Question Metric Paradigm. In: Marciniak, J.J. (ed.): Encyclopaedia of Software Engineering, pp. 528-532, Wiley, New York, 1994.
[Bor06] Børretzen, J.A., Conradi, R.: Results and Experiences From an Empirical Study of Fault Reports in Industrial Projects. Proceedings of the 7th International Conference on Product Focused Software Process Improvement (PROFES'2006), pp. 389-394, Amsterdam, 12-14 June 2006.
[Bor07] Børretzen, J.A., Dyre-Hansen, J.: Investigating the Software Fault Profile of Industrial Projects to Determine Process Improvement Areas: An Empirical Study.
Proceedings of the European Systems & Software Process Improvement and Innovation Conference 2007 (EuroSPI’07), pp. 212-223, Potsdam, Germany, 26-28 September 2007.
[Con99] Conradi, R., Marjara, A.S., Skåtevik, B.: An Empirical Study of Inspection and Testing Data at Ericsson. Proceedings of the International Conference on Product Focused Software Process Improvement (PROFES'99), pp. 263-284, Oulu, Finland, 22 June 1999.
[Chil92] Chillarege, R., Bhandari, I.S., Chaar, J.K., Halliday, M.J., Moebus, D.S., Ray, B.K., Wong, M.-Y.: Orthogonal Defect Classification - A Concept for In-Process Measurements. IEEE Transactions on Software Engineering, 18(11), pp. 943-956, November 1992.
[Gra92] Grady, R.: Practical Software Metrics for Project Management and Process Improvement, Prentice Hall, 1992.
[Gen] The Gentoo Linux project, available from: http://www.gentoo.org/
[IEEE 1044] IEEE Standard Classification for Software Anomalies, IEEE Std 1044-1993, December 2, 1993.
[Jør98] Jørgensen, M., Sjøberg, D.I.K., Conradi R.: Reuse of software development experience at Telenor Telecom Software. In Proceedings of the European Software Process Improvement Conference (EuroSPI'98), pp. 10.19-10.31, Gothenburg, Sweden, 16-18 November 1998.
[Moh04] Mohagheghi, P., Conradi, R.: Exploring Industrial Data Repositories: Where Software Development Approaches Meet. In Proceedings of the 8th ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE’04), pp. 61-77, Oslo, Norway, 15 June 2004.
[Moh06] Mohagheghi, P., Conradi, R., Børretzen, J.A.: Revisiting the Problem of Using Problem Reports for Quality Assessment. Proceedings of the 4th Workshop on Software Quality, held at ICSE'06, pp. 45-50, Shanghai, 21 May 2006.
[Zel98] Zelkowitz, M.V., Wallace, D.R.: Experimental Models for Validating Technology. IEEE Computer, 31(5), pp. 23-31, May 1998.
Appendix B: Interview guide

Questions for Test Managers

Background
1. Which responsibilities do you have in the organization?
2. How long have you been working in the company?
3. What was your involvement in the project under study?
4. Are you still involved in work with this project?
On the study results
1. The results from our study (both on the organization in general and this project) show that many faults are of a character that indicates they were introduced in the specification and design phases. How does this compare to your impression of the faults found in your projects?
2. How do you feel that the analysis results for this project compare to your experience of the project?
3. What do you think about the fault categorization scheme we have used, based on ODC?
On the organization’s own measurements and results
1. The organization uses its own way of categorizing faults today; how well do you think this works?
2. Some results we have received from the organization indicate where in the development process faults were introduced and where they were discovered. Does your project report this type of information?
3. How do you separate design faults from implementation faults when reporting faults? Do design faults sometimes get reported as change requests?
The quality system
1. What is the fault reporting process like in your organization, and who is responsible for quality?
2. Which tools do you use in fault reporting? Are they the same as in change request reporting?
3. What is the fault correction process like?
4. How much effort does it take to register a fault report? Do you think this task could or should be simplified?
5. Do the reporters of a fault have the same access to the system and information as those who are going to correct it?
6. Do you think that all the necessary information is accessible when reporting a fault?
7. Do you think that all the necessary information is accessible when correcting a fault?
8. Is the fault reporting in any way used as a basis for process improvement, or is it only used as a log of faults that are to be corrected?
9. Do you register information about hours of effort for fault finding and correction? This is relevant for knowing which faults require the most resources.
10. Do you think the tool support for fault reporting is good enough?
Fault reports: Available information Amount of information, correct fields, number of fields
1. Do you think the fields that are used in the fault reporting system are sufficient?
2. Are there any extraneous fields that are not used, or that are filled in without the information being used further?
3. Do you think that any fields are missing?
4. Do you have any information about fault location? In some projects you use the field "Testobjekt"; does this describe functional modules or structural modules that can be linked to code?
5. Do you have the necessary information available to tell which components are involved in a fault correction, or is this implicit knowledge that only the developers have?
6. Is it possible to use the fault reporting tool later to look up which components/code parts have been involved in a fault correction? For example, to find which components have the most severe faults, and so on.
Feedback from fault reporting
1. Do you have any sort of feedback to the developers based on what you find in your quality system?
2. How do you think feedback from what is being done can be used for improvement?
3. Have there been any changes in technical issues, development processes, or your work as a systems developer, based on what has been uncovered as faults during development of your systems?
1. Do you think the organization is willing to change its reporting routines, with respect to adding information for use in analysis (or changing them to increase the precision/correctness of the information)?
2. Do you think such changes would be useful in order to improve product quality?
3. How much effort and which actions do you think the company initiates in order