# Item type: Dissertation, Reproduction (electronic); Author: DUNN, THURMAN STANLEY; Publisher: The University of Arizona; Rights: Copyright © ...

In the general area of risk analysis, there are several techniques in the literature for evaluating the significance of threats once they have been identified. The basic process, however, tends to follow three steps (Auerbach 1980):

1. Estimate the frequency of threat events.
2. Estimate the cost per occurrence.
3. Calculate the expected annual loss for each event.

If, for example, the estimated frequency for a given threat is once in fifty years and the estimated loss per occurrence is $100,000, the expected annual loss for the threat would be calculated as follows:

Average Annual Loss = Estimated Loss Per Occurrence × Estimated Frequency

Average Annual Loss = $100,000 / 50 = $2,000

This approach lends itself quite well to the unintentional variety of computer threats such as floods, fires, errors and mechanical failures, since these events are empirically predictable with some degree of accuracy. Unfortunately, the approach is not as effective for intentional acts such as computer fraud or sabotage, which are not empirically predictable.
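The three-step calculation can be sketched directly; the threat names and figures below are illustrative placeholders, not values from the text:

```python
# Expected annual loss = estimated loss per occurrence x estimated annual frequency.
# Threats and figures are illustrative placeholders.
threats = {
    "flood":              {"loss_per_occurrence": 100_000, "annual_frequency": 1 / 50},
    "mechanical failure": {"loss_per_occurrence": 5_000,   "annual_frequency": 2.0},
}

def expected_annual_loss(loss_per_occurrence: float, annual_frequency: float) -> float:
    """Annualized loss expectancy for a single threat event."""
    return loss_per_occurrence * annual_frequency

for name, t in threats.items():
    print(f"{name}: ${expected_annual_loss(**t):,.0f} per year")
```

For the worked example in the text, `expected_annual_loss(100_000, 1/50)` yields the same $2,000 figure.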

Churchman-Ackoff Method

An approach which does not use expected values or probabilities is the Churchman-Ackoff method (R. L. Ackoff and M. W. Sasieni, 1968). This procedure is based on the premise that, given any two outcomes, a decision-maker can estimate how serious they are in terms of their relative significance. It is applicable only to the analysis of outcomes that can be expressed in terms of "yes" or "no".

**It is based on the following assumptions:**

1. For the initial ranking, it is assumed that the decision-maker can make a rough measure of threats. For all possible pairs of threat choices (e.g., T(1) and T(2)), the decision-maker must know whether T(1) is more or less of a threat than T(2).

2. The decision-maker is also assumed to be able to assign some value to every threat to identify how serious it is. The difference between these values should allow us to make statements such as "one choice is twice as risky as another".

Every choice, therefore, must be assigned a number that represents a measure of its relative threat.

3. If T(1) is more of a threat than T(2), it will rank higher.

If the values assigned to the two threats are equal, the importance of these threats is equal in the mind of the decision-maker.

4. If two threats (T(1) and T(2)) have different threat levels, the combined outcome of these two threats is the sum of the two. This assumption will fail if the two threats are mutually exclusive. It will also fail if the occurrence of one threat implies the occurrence of the other threat.

5. This procedure works best with six to eight threat choices.

The relative significance of threats may be evaluated using the Churchman-Ackoff method as follows:

1. Have the decision-maker (or group) rank the threats in the order of their importance to the organization. Referring back to Chapter 2, this ranking should reflect vulnerabilities, considering both the likelihood and the impact should fraud occur.

Let T(l) represent the greatest threat and T(N) the least threat, where N equals the number of different threats being considered.

2. Assign a value of 1 to T(N) (the least threat) and have the decision-maker assign numbers to the remaining threats in order to identify their relative importance to the organization. In this way, the items that are of the greatest threat will have the highest values.

3. Present the decision-maker with the choices shown in the general form in Figure 22 below:

Start
T(1) : T(2) + T(3) + ... + T(N)
T(1) : T(2) + T(3) + ... + T(N-1)
T(1) : T(2) + T(3) + ... + T(N-2)
...
T(N-2) : T(N-1) + T(N)
T(N-2) : T(N-1)
STOP
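One plausible reading of this general form, in which each threat T(i) is compared against progressively shorter sums of the lesser-ranked threats, can be generated mechanically (the exact rows presented to a decision-maker may vary from this sketch):

```python
def comparison_series(n: int) -> list[str]:
    """Generate a Figure 22-style comparison series for n ranked threats.

    One plausible reading of the general form: each T(i), for i = 1 .. n-2,
    is compared against the sums T(i+1) + ... + T(j), with j running from n
    down to i+1.
    """
    rows = []
    for i in range(1, n - 1):
        for j in range(n, i, -1):
            rhs = " + ".join(f"T({k})" for k in range(i + 1, j + 1))
            rows.append(f"T({i}) : {rhs}")
    return rows

for row in comparison_series(5):   # small example with N = 5
    print(row)
```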

ST(1) is eighteen times the threat of ST(7), ST(5) is two times the threat of ST(6), etc.

The assigned weight values should then be plugged into the series of comparisons and necessary adjustments made. Starting at the bottom of the list and working toward the top, this is accomplished in Figure 23.

ST(6) = 2 and ST(7) = 1. These values form the basis for calculating threat values for the computer fraud schemes in Figure 21. However, prior to developing these values, the perpetrators identified in Figure 21 will be ranked in the same manner as the schemes were in the analysis above. Assume that the perpetrator threats (PT) from Figure 21 have been ranked in the following order, where PT(1) represents the greatest threat and PT(7) represents the least threat.

PT(1) = Data Entry/Terminal Operator
PT(2) = Officer/Manager
PT(3) = Programmer
PT(4) = Clerk/Teller
PT(5) = Other Staff
PT(6) = Outsider (non-employee)
PT(7) = Computer Operator

Now assume that the series of comparisons depicted in Figure 22 work out as follows when applied to the perpetrator threats:

1. PT(1) > PT(2) + PT(3) + PT(4) + PT(5) + PT(6) + PT(7)
2. PT(1) > PT(2) + PT(3) + PT(4) + PT(5) + PT(6)
3. PT(1) > PT(2) + PT(3) + PT(4) + PT(5)
4. PT(1) > PT(2) + PT(3) + PT(4)
5. PT(1) > PT(2) + PT(3)
6. PT(2) > PT(3) + PT(4) + PT(5) + PT(6) + PT(7)
7. PT(2) > PT(3) + PT(4) + PT(5) + PT(6)
8. PT(2) > PT(3) + PT(4) + PT(5)
9. PT(2) > PT(3) + PT(4)
10. PT(2) > PT(3)
11. PT(3) > PT(4) + PT(5) + PT(6) + PT(7)
12. PT(3) > PT(4) + PT(5) + PT(6)
13. PT(3) > PT(4) + PT(5)
14. PT(4) > PT(5) + PT(6) + PT(7)
15. PT(4) > PT(5) + PT(6)
16. PT(4) > PT(5)
17. PT(5) > PT(6) + PT(7)
18. PT(5) > PT(6)
19. PT(6) > PT(7)
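Assuming each comparison records that the higher-ranked threat exceeds the combination on its right (one reading of the comparison symbols), any value scale in which each PT(k) exceeds the sum of all lower-ranked values is consistent with every judgment; a doubling scale is the tightest integer scale with that property. The consistency check can be sketched as follows (the values are illustrative, not the dissertation's assigned weights):

```python
# Candidate Churchman-Ackoff values, PT(1) greatest ... PT(7) least.
# A doubling scale guarantees each value exceeds the sum of all lower ones.
values = {k: 2 ** (7 - k) for k in range(1, 8)}   # PT(1)=64 ... PT(7)=1

# The nineteen comparisons: (left index, tuple of right-hand indices).
comparisons = [
    (1, (2, 3, 4, 5, 6, 7)), (1, (2, 3, 4, 5, 6)), (1, (2, 3, 4, 5)),
    (1, (2, 3, 4)), (1, (2, 3)),
    (2, (3, 4, 5, 6, 7)), (2, (3, 4, 5, 6)), (2, (3, 4, 5)), (2, (3, 4)), (2, (3,)),
    (3, (4, 5, 6, 7)), (3, (4, 5, 6)), (3, (4, 5)),
    (4, (5, 6, 7)), (4, (5, 6)), (4, (5,)),
    (5, (6, 7)), (5, (6,)),
    (6, (7,)),
]

def consistent(values: dict[int, float]) -> bool:
    """True if PT(left) > sum(PT(right...)) for every recorded judgment."""
    return all(values[l] > sum(values[r] for r in rhs) for l, rhs in comparisons)

print(consistent(values))  # the doubling scale satisfies all nineteen judgments
```

If the check fails for a candidate scale, the decision-maker adjusts the offending values upward and repeats, which is the adjustment loop the text describes.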

The threat values in Figure 26 are multiplied by 100 in Figure 27. This is accomplished without changing the meaning of the threat matrix, since it is the relative magnitude of the threat values rather than their absolute values which is important. The reasons for multiplying the values in Figure 26 by 100 are: to minimize the number of fractions that must be used in further analysis; and to present the threat values in a form which more closely resembles those in the General Threat Assessment in Chapter 3 (Figure 12).

At this point it should be possible to determine whether the threat level is acceptable for a given system. Referring back to Figure 8, recall that, if the threat level is acceptable, no further analysis is needed. If it is not, then it is necessary to perform Controls Analysis for each specific threat for which an unacceptable threat exists.

Controls Analysis

The purpose of Controls Analysis is to identify and evaluate the organizational and systems controls which are being used.

The suggested approach for performing the Controls Analysis is similar to the small group or brainstorming approaches described above for Threat Analysis. The first step is to identify all controls which are in effect for a given system. In accomplishing this first step, the group might refer to a published list of controls (they are plentiful in the literature) and develop a subset of controls in effect within the system being evaluated. However, such an approach assumes the comprehensiveness of the published list, which may not be an accurate assumption. A preferred approach is for the group to develop a list of controls on their own. When the group, after several iterations, can think of no additional controls, a review of published lists might be beneficial as a vehicle for identifying additional controls. Approached in this manner, the analysis is not limited to the contents of published lists, which may not be comprehensive for a given system but may be beneficial in augmenting the group effort.

Once the group is satisfied that all significant controls have been identified, the controls should be correlated with the cells in the threat matrix. One method for accomplishing this is to number the controls from 1 to N, where N = the total number of controls, then record the numbers of the controls which protect against specific threats in the corresponding matrix cells, similar to Fitzgerald's approach (Fitzgerald 1978). For example, assume that the first control limited access to computer programs to computer programmers only. This control would be given the number 1, and this number would be correlated with each cell in the matrix for which it provides protection.

Referring to Figure 21, it is evident that the number 1 would appear in all cells under the column heading "Program Changes" with the exception of the one intersecting with the row entitled "Programmer". This process would be continued until all controls have been exhausted.
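The numbering-and-correlation step amounts to building a mapping from matrix cells to the control numbers that protect them. A minimal sketch, with illustrative control text and only a subset of the Figure 21 row and column labels:

```python
# Controls are numbered 1..N; each matrix cell (scheme, perpetrator) accumulates
# the numbers of the controls that protect against that combination.
# Control descriptions and cell labels are illustrative placeholders.
controls = {
    1: "Access to computer programs limited to computer programmers only",
    2: "Dual authorization required for transactions over a set limit",
}

schemes = ["Transactions Added", "Program Changes"]
perpetrators = ["Data Entry/Terminal Operator", "Programmer"]

# Start with an empty control list in every cell of the threat matrix.
matrix = {(s, p): [] for s in schemes for p in perpetrators}

# Control 1 protects every "Program Changes" cell except the "Programmer" row,
# mirroring the example in the text.
for p in perpetrators:
    if p != "Programmer":
        matrix[("Program Changes", p)].append(1)

print(matrix[("Program Changes", "Data Entry/Terminal Operator")])  # [1]
print(matrix[("Program Changes", "Programmer")])                    # []
```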

The next phase of Controls Analysis is the evaluation of the controls which have been correlated with the threats in the matrix cells. Ideally, by reviewing the controls associated with each cell, a numeric confidence level of fraud prevention could be assigned. Thus, by reviewing the controls associated with matrix cell 1,1 ("Transactions Added, Data Entry/Terminal Operator"), for example, it would be possible to state with some specific level of confidence that fraud will be prevented (e.g., "If these controls are used, a 95% probability of preventing fraud is assured"). Unfortunately, intentional acts such as computer fraud are generally not amenable to such quantification.

However, a small group of experts using an approach such as "Delphi" should be capable of subjectively categorizing the controls within each matrix cell according to their aggregate strength. The categorizations in Figure 28, or a similar set, should be adequate for purposes of the Controls Analysis.

Research or analysis often leads to the evaluation of various alternatives. Typically, the objective of this evaluation is to determine an optimum or near optimum alternative, based on some measurable criteria.

The combinatorial dilemma occurs when, due to the phenomenally large number of possible combinations, it is not practical to evaluate them all. The combinatorial dilemma is well known; however, it is explained below to develop a perspective prior to presenting a solution. Although it may be intuitively apparent that certain combinatorial situations will result in large numbers of combinations, the magnitude may not be readily apparent. Consider, for example, the number of possible deals in a game of bridge. The hands N, E, S, and W determine a partition of the 52 cards having four "cells", each with 13 elements. It is rather apparent that there are a large number of possible deals or combinations in this example. Actually, there are 52!/(13! 13! 13! 13!) possible deals or combinations. Recalling that N! = (N)(N-1)...(1), this equates to approximately 5.3645 x 10^28.
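The bridge-deal count can be checked directly as the multinomial coefficient 52!/(13!)^4:

```python
import math

# Number of distinct bridge deals: partition 52 cards into the four hands
# N, E, S, W of 13 cards each -> 52! / (13! * 13! * 13! * 13!).
deals = math.factorial(52) // math.factorial(13) ** 4

print(f"{deals:,}")      # about 5.3645 x 10**28 distinct deals
print(f"{deals:.4e}")
```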

Further, each different product at each possible price results in a different contribution to ABC's profits. The primary decision may be restated as "what price should be charged for each product in order to drive demand for each to a level which will optimize the use of ABC's plant as measured by total contribution to profit?"

Given that the reasonable price ranges for the various products are measured in dollars, there could be hundreds of different prices for each product, since the alternative prices could differ by only a few cents. More than likely, however, the number of prices to be considered for each product could be limited to a more reasonable number, for example 10 to 15 discrete prices. For purposes of illustration, assume that ABC's marketing department has developed a demand curve for each product based on 12 different prices per product.

The objective is to determine that combination of prices and the resulting demand levels for each product which optimizes the use of ABC's plant as measured by the profit potential. To fully evaluate all possible combinations of products and prices would require evaluating as many as 12^12, or nearly nine trillion, combinations.

Assuming it takes only a few nanoseconds of central processor (CPU) time on a large, high speed computer to calculate the potential profit for each combination, a total evaluation would require several hours of CPU time. For example, if the evaluation of each alternative took four nanoseconds, approximately ten hours of CPU time would be required. Although ten hours of CPU time is certainly more reasonable than the 1.7 trillion years in the bridge example, it still represents an expensive use of computer time.
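The ten-hour figure follows directly from the combination count and the assumed four nanoseconds per evaluation:

```python
combinations = 12 ** 12        # 12 candidate prices for each of 12 products
seconds_per_eval = 4e-9        # assumed CPU cost per alternative
cpu_seconds = combinations * seconds_per_eval
cpu_hours = cpu_seconds / 3600

print(f"{combinations:,} combinations")   # 8,916,100,448,256
print(f"{cpu_hours:.1f} CPU hours")       # roughly ten hours
```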

Consider, however, that an alternative methodology were available which, by evaluating only three thousand combinations, would ensure with 99+ percent confidence a solution which, based on the measurable criteria, is equal or superior to 99.8 percent of the possible combinations. Using the .05 second example from above, this methodology would require only 150 seconds of CPU time. Unless there are some strange patterns in the data, a combination in the top .2 percent would probably very closely approximate the savings of a total evaluation.
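The 99+ percent figure is consistent with simple random sampling of combinations: the probability that at least one of n independent draws lands in the top fraction p of all combinations is 1 - (1 - p)^n. A quick check under that assumption:

```python
n = 3000    # combinations actually evaluated
p = 0.002   # target: a solution in the top 0.2 percent

# Probability that at least one of n independent random draws
# falls within the top fraction p of all possible combinations.
confidence = 1 - (1 - p) ** n

print(f"{confidence:.4f}")   # about 0.9975, the "99+ percent" in the text
```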

The methodology in this chapter is designed to provide such a capability. The only real limitation is in the case of unusual patterns or outliers in the data. For instance, if among the nine trillion possible combinations facing ABC there were one or two that were considerably more profitable than all of the others, the solution reached by the methodology, although within the top few percentiles of possible solutions, might still be a considerable distance from the best solution. This situation parallels the "needle in the haystack" situation discussed in Chapter 4 on "Discovery Sampling".

If it is suspected that this condition exists, a decision would have to be made as to whether the potential superiority of the outlier solutions would justify the cost of a total evaluation.

In some cases the total evaluation may not be a feasible alternative regardless of the pattern of the data. For example, in the bridge example above, it just isn't feasible to perform a total evaluation requiring 1.7 trillion years.

A General Solution To The Combinatorial Dilemma

A general solution to the combinatorial dilemma will be presented in this chapter and cited in Chapter 7 in The Computer Fraud Detection Resource Optimization Model. The solution is based upon an iterative discovery sampling technique developed by Dunn (1971). The approach begins with Arkin's (1967) Discovery (Exploratory) Sampling technique. Recall from Chapter 4 that the purpose of discovery