Dissertation (electronic reproduction). Author: Thurman Stanley Dunn. Publisher: The University of Arizona.
calculate detection quotients using the DQ formula. For purposes of illustration, the probabilities of detecting at least one occurrence and the rates of occurrence for all scheme/perpetrator combinations were set at a constant 90% and .5%, respectively. This constraint is lifted in describing the resource optimization model in Chapter 7, where the probabilities and rates of occurrence become variables for each scheme/perpetrator combination in seeking a maximum or near-maximum DQ for a given level of resources.
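The DQ formula itself is defined elsewhere in the text and is not reproduced here. As a minimal sketch only, assuming occurrences are detected independently with some per-occurrence probability, the "probability of detecting at least one occurrence" fixed at 90% in the illustration can be computed as follows. The function name and numbers are hypothetical.

```python
# Hypothetical sketch: probability of detecting at least one of n
# independent occurrences, given a per-occurrence detection probability p.
# This only illustrates the "at least one" quantity the text fixes at 90%;
# it is not the dissertation's DQ formula.

def prob_detect_at_least_one(p_per_occurrence, n_occurrences):
    """P(detect >= 1) = 1 - (1 - p)^n under independence."""
    return 1.0 - (1.0 - p_per_occurrence) ** n_occurrences

# A 37% chance of catching any single occurrence, over five occurrences,
# yields roughly the 90% detection level used in the illustration.
print(round(prob_detect_at_least_one(0.37, 5), 3))
```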
In contrast to the "General Threat Assessment" in Chapter 3, which transcends all organization and systems types, the assessment in this chapter is tailored to fit individual systems within given organizations.
Specific threat assessment will be defined as "the application of scientific procedures to the study and evaluation of the computer fraud risks surrounding individual automated systems" (Auerbach 1980).
For purposes of this methodology, the term "specific threat assessment" will be used interchangeably with the terms "risk analysis", "threat analysis" and "vulnerability analysis".
Because of the limited resources available to the typical organization to combat computer fraud, threat assessment is vital in allocating these resources effectively. In a recent article (Auerbach 1980), Lobel speculated on the use of risk analysis in the 1980's.
Although risk analysis is still in its infancy, Lobel predicts "major breakthroughs before the end of this decade -- both in improved techniques and greater utilization of risk analysis procedures by computer users". However, even by today's standards, he contends that risk analysis has real merit. The problem is "that many organizations have to be convinced of its potential".
Several techniques were found in the literature for evaluating the threats or risks surrounding systems. Practically all of these techniques were oriented to the broader subject of computer security rather than computer fraud. Many of the risks or threats covered in these techniques are of the "unintentional" variety. Examples of this variety of risk or threat are natural disasters, e.g., extreme weather conditions and errors and omissions, e.g., incorrect tape labeling.
The unintentional variety of risks or threats are empirically predictable which facilitates threat assessment. Intentional acts such as computer fraud or sabotage are generally not empirically predictable and, thus, do not always lend themselves to the same threat assessment techniques as unintentional acts like natural disasters, errors and omissions.
In order to establish a framework for reference between various existing techniques in "risk analysis" and the methodology in this chapter, several of those found in the literature are synopsized below.
Risk Estimation Approach

This approach simply requires that system vulnerabilities be identified and the potential losses associated with these vulnerabilities estimated without the benefit of any appreciable historical data. The following techniques for performing risk estimation represent the views of several different authors as to how this might be accomplished.
Benefits Approach (Krauss and MacGahan, 1979)

In their book entitled "Computer Fraud and Countermeasures", Krauss and MacGahan attempt to quantify the benefits of computer fraud countermeasures. These benefits include risk reduction and other benefits. They suggest a four-step approach, as shown in Figure 17.
The first step is to list the specific threats or risks that might materialize. In the sample payroll application shown in Figure 17, the first such risk is having a fraud perpetrated by adding an extra employee to the payroll system.
The second step is to estimate the number of occurrences for each risk on an annual basis. If it is estimated that an incident could happen once every four years, the frequency would be 0.25.
The third step is to estimate the average dollar loss per incident. An alternative approach is to use high/average/low dollar estimates with corresponding frequencies of occurrence for each of the three dollar values in Step 2.
The final step is to compute the expected annual losses by multiplying the frequency times the dollar value of an occurrence. The total, then, represents the expected dollar losses from all risks considered. This total of $63,750 in Figure 17 would indicate a maximum to spend on countermeasures according to the authors.
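The arithmetic of the four steps can be sketched as follows. The risks, frequencies, and dollar values below are hypothetical stand-ins, not the figures from Figure 17; only the computation (annual frequency times loss per incident, summed across risks) follows the text.

```python
# Sketch of the Krauss/MacGahan expected-loss computation.
# Each entry: (risk description, estimated annual frequency,
#              average dollar loss per incident). All values hypothetical.
risks = [
    ("Extra employee added to payroll", 0.25, 50_000),
    ("Hours inflated on time cards",    2.00, 1_500),
    ("Unauthorized pay-rate change",    0.50, 4_000),
]

# Expected annual loss per risk = frequency x average loss per incident.
expected_losses = {desc: freq * loss for desc, freq, loss in risks}

# The total would indicate a maximum to spend on countermeasures,
# according to the authors.
total = sum(expected_losses.values())

for desc, loss in expected_losses.items():
    print(f"{desc}: ${loss:,.0f}")
print(f"Total expected annual loss: ${total:,.0f}")
```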
The above process should be accomplished by a group of people, possibly including representatives from the user department, data processing, internal audit, independent audit, security, personnel, and consultants on computer security and controls.
As pointed out by Krauss and MacGahan, the above results are difficult to defend since the factors are developed from guesses, inasmuch as a statistical basis does not generally exist for a given organization. In effect, there is no scientific defense to support the use of the total expected losses calculated in the above fashion in determining the amount to spend on countermeasures.
The National Bureau of Standards (NBS) Approach (NBS Special Publication, April 1980)

NBS has developed a general strategy for risk analysis which classifies undesirable computer events in terms of their general effects on computerized data rather than in terms of their ultimate effect, such as denial of benefits or loss of money or resources.
The NBS classification of undesirable events (vulnerabilities that are activated) relates them directly to three general security control objectives for all application systems. These vulnerabilities and their countering control objectives are:

Modification or destruction of data - Data Integrity
Disclosure of data - Data Confidentiality
Unavailability of data or system service - ADP Availability

Computer fraud would fall into the first category. The last two categories pertain to other subsets in the broader topic of computer security.
The NBS approach is oriented primarily to the development of controls to offset vulnerabilities. The approach does not couple specific vulnerabilities with specific controls. However, NBS has compiled a long list of vulnerabilities which occur in an application environment.
Six basic control categories are provided by NBS, along with the general problems that each will address. Included are:

Data Validation
  Consistency and reasonableness checks
  Data entry validation
  Validation during processing
  Data element dictionary/directory
User Identity Verification
Authorization
Journalling
Variance Detection
Encryption

The NBS approach also emphasizes the placement and use of
appropriate controls at each stage of the system life cycle:
initiation; development; and operation. The NBS approach provides valuable insight and direction into problems, vulnerabilities and controls. However, it does not, nor was it intended to, provide a specific methodology for implementing the approach.
The Matrix Approach (Fitzgerald 1978)

The most rigorous of the techniques reviewed, the Matrix Approach subdivides the data processing function into nine components.
Controls relating to each component are then enumerated and correlated with specific vulnerabilities. Fitzgerald identifies more than 650 practical controls. The nine data processing categories against which
the controls are analyzed are:
General Organization
Input
Data Communication
Program/Computer Processing
Output
On-Line Terminal/Distributed Systems
Physical Security
Data Base
System Software

For each of these components a matrix is developed which identifies specific controls that address a firm's concerns/exposures for protection of the resources/assets peculiar to that component of the overall system.
The Matrix Approach does not comprise a methodology but rather a tool for conducting an internal control review. Fitzgerald makes the
following statement with regard to the approach:
"The nine control matrices do not comprise a methodology on the conduct of an internal control review. Instead, the overall methodology on how to conduct an internal control review is assumed to be already established within the organization. The matrix approach of this book is a tool that works with the diverse methodologies for conducting internal control reviews that are in use by a variety of organizations."

The Matrix Approach provides an excellent medium for identifying and reviewing relevant controls for the protection of resources/assets from general threats. The approach is, however, quite cumbersome and complicated, with nine matrices and over 650 controls.
Further, in the final analysis, the approach provides a "yes/no" assessment. Thus, a determination is made that controls either exist or do not exist to protect a given resource or asset from a specific threat. The question still remains as to how effective the controls are.
A Specific Risk Assessment Methodology

The Specific Risk Assessment Methodology for this dissertation differs from other efforts in that it: concentrates solely on computer fraud; considers risk assessment and abatement a resource optimization problem; and provides a means of quantifying its effectiveness.
Specific Threat Analysis

As with the General Threat Analysis methodology in Chapter 3, the Specific Threat Analysis uses a matrix approach starting with the blank matrix shown in Figure 18.
Manipulation schemes are listed across the top of the matrix and potential perpetrators are listed down the left side of the matrix.
The matrix size is determined by the number of threats and potential perpetrators identified for a given system. The matrix cells will be discussed in the next section on controls analysis. Consider for a moment what the threat matrix might look like for the simplified system schematic in Figure 19.
Now consider what the matrix might look like for the schematic in Figure 20. It should be apparent that the matrices for the two systems will vary significantly.
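Such a matrix can be represented as a simple data structure, with one cell for every scheme/perpetrator combination. The scheme and perpetrator names below are hypothetical placeholders, not the entries of Figures 19 or 20; the cell contents themselves (controls analysis) are taken up in the next section.

```python
# Sketch of a specific-threat matrix: manipulation schemes across the top,
# potential perpetrators down the left side. All names are hypothetical.
schemes = ["False input", "File alteration", "Program manipulation"]
perpetrators = ["Data entry clerk", "Programmer", "Operator"]

# Initialize every cell as "no threat identified"; an analyst then marks
# the combinations judged credible for the system under review.
matrix = {p: {s: False for s in schemes} for p in perpetrators}

matrix["Data entry clerk"]["False input"] = True
matrix["Programmer"]["Program manipulation"] = True

# List the identified scheme/perpetrator threat combinations.
threats = [(p, s) for p in perpetrators for s in schemes if matrix[p][s]]
print(threats)
```

The matrix size is set by the counts of schemes and perpetrators, so the structure scales directly with the complexity of the system being analyzed.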
Figure 18. Specific Threat Analysis - Blank Matrix

There are several techniques for identifying significant scheme and perpetrator threats for a system. The system might be reviewed by an internal or external audit group who, in turn, fills in the matrix.
The systems design group or computer section could be called on to do the job, or the functional manager might do it. Another technique might be to look at similar systems for which schemes have been identified and assume similar schemes and perpetrators for the system at hand. Or, a simple review of recent literature on internal controls
for computerized systems might reveal "common" schemes and perpetrators.
Any of these techniques will probably identify most of the threats surrounding a given system with a modest effort, particularly for a relatively simple system such as that portrayed in Figure 19.
However, for the larger, more complicated systems which cross several functional lines (e.g., accounting, budget, inventory, etc.) and several technical disciplines (e.g., computer hardware, computer software, communications, engineering, etc.), a more sophisticated approach might be required.
Figure 20. On-Line System

A better approach is to form a group made up of an expert or experts from each of the primary areas involved, with the express objective of identifying the schemes and potential perpetrators for a given system.
For example, assume that the system in Figure 20 is an integrated accounting, inventory, and disbursing system. In this case the group should include experts from each of the following areas: accounting; inventory control; paying and collecting; computer hardware; communications; systems design; systems analysis; computer programming; and internal review or auditing. One individual might be an expert in more than one area (e.g., systems analysis, systems design, and computer programming).
Using a good structured approach, such a group should be able to identify the schemes and potential perpetrators for the system with a high degree of accuracy within just a few hours.
A recommended approach for the threat analysis is the Delphi method. This method is characterized as a technique for structuring a group communication process so that the group of individuals, as a whole, can effectively deal with a complex problem.
Although there are various approaches to Delphi, it usually involves four phases (Linstone and Turoff 1975). The first phase involves exploration of the subject under discussion. Each individual contributes additional information he or she feels is pertinent to the issue. In the second phase, an understanding is reached of how the group views the issue (i.e., where the members agree or disagree and what they mean by relative terms such as importance, desirability, or feasibility). If significant disagreement exists, the disagreement is explored in the third phase to bring out the underlying reasons for the differences and possibly to evaluate them. The last phase is a final evaluation, when all previously gathered information has been initially analyzed and the evaluations have been fed back for consideration and modification, if necessary.
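One common way to operationalize the second and third phases is for the monitor to feed back the group median and spread of each rating, asking outliers to justify or revise their positions in the next round. The sketch below shows only that aggregation step, with hypothetical experts and ratings; it is not a full Delphi protocol.

```python
# Hypothetical sketch of the feedback step in one Delphi round: each expert
# rates the likelihood of a candidate scheme on a 1-10 scale, and the monitor
# reports the median and interquartile range back to the group.
from statistics import median, quantiles

ratings = {"Expert A": 8, "Expert B": 7, "Expert C": 3, "Expert D": 8}

values = sorted(ratings.values())
med = median(values)
q1, _, q3 = quantiles(values, n=4)  # first and third quartiles

print(f"Group median: {med}, interquartile range: {q1}-{q3}")

# Experts whose ratings fall outside the interquartile range are asked to
# explain or reconsider in the next round.
dissenters = [name for name, r in ratings.items() if r < q1 or r > q3]
print("Asked to elaborate:", dissenters)
```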
The Delphi approach lends itself well to problems for which there are no precise analytical techniques but which can benefit from subjective judgments on a collective basis. Identification of schemes and potential perpetrators in automated systems of varying size and complexity generally fits this description.
The group monitor should be familiar with the Delphi method and, preferably, have some experience in administering it. If such a person is not available, it would probably be better not to attempt the Delphi approach, but rather to opt for a less structured "brainstorming" approach. Although the latter may not be as effective as a well-run Delphi, it would probably be superior to a poorly run Delphi.
For purposes of illustration, assume that the schemes and perpetrators in Figure 21 have been identified for a given system.