Deliberate Misreporting With Lapping. A program that has been manipulated to cause misreporting either fails to apply a charge to a perpetrator's account (the charge gets applied to another account) or credits a perpetrator's account with a payment (the account that should have been credited is not posted). Either way, certain problems are bound to arise. In the first case, complaints can be expected from those whose accounts now carry unauthorized charges. In the second case, complaints can be expected from those whose accounts were not credited. To avoid this, a process of deliberate misposting, correcting the deliberate misposting, and creating another deliberate misposting, called lapping, is used to continue the fraud. All lapping schemes of any merit call for masterful time management and meticulous record keeping on the part of the perpetrator.
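The rolling bookkeeping that lapping requires can be sketched in a few lines; the account names, amounts, and the `lap` helper below are invented purely for illustration:

```python
def lap(ledger, payments, skimmed_account):
    """Post each incoming payment against the account that is currently
    short, rolling the shortfall forward to the payer's own account."""
    short = skimmed_account           # account whose payment was stolen
    for payer, amount in payments:
        ledger[short] -= amount       # cover the earlier shortfall
        short = payer                 # payer's account is now unposted
    return short                      # account still awaiting a credit

# All four customers owe $100; A's payment is pocketed by the perpetrator,
# then B's and C's payments are "lapped" forward to hide the theft.
ledger = {"A": 100, "B": 100, "C": 100, "D": 100}
unposted = lap(ledger, [("B", 100), ("C", 100)], "A")
# A and B now look fully paid; C's credit is the one currently missing,
# so the perpetrator must keep lapping indefinitely to avoid complaints.
```

The sketch shows why the text stresses time management: at every moment exactly one account is unposted, and it must be covered before its owner notices.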
File Modification. Altering programs to effect secret changes in account status is a fairly common programming technique for
computer fraud. Examples of account status changes include:
opening an account for subsequent fraudulent manipulation in order to receive automatic payments (payroll, retirement, unemployment, welfare, etc.); destroying the record of a fraudulent account; inhibiting the printing of an account's past-due status; and increasing the credit limit on a credit account so that a greater charge will be authorized.
Fudging Control Totals. This tactic is often combined with other programming schemes. The approach involves processing that occurs without being properly reflected in control totals.
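A minimal sketch of why control totals matter and what "fudging" them accomplishes; the batch amounts below are invented for illustration:

```python
# A batch of transaction amounts and its control total, computed when
# the batch is created (before any fraudulent manipulation).
transactions = [250, 125, 625]
control_total = sum(transactions)

# Fraud: the last transaction is diverted and never posted.
posted = transactions[:-1]
assert sum(posted) != control_total   # an honest total exposes the gap

# "Fudging" the control total: the perpetrator recomputes it over the
# manipulated batch, so the reconciliation check now passes silently.
fudged_total = sum(posted)
assert sum(posted) == fudged_total
```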
File Alteration Schemes

Access To A Live Master File. One fairly common form of fraudulent file alteration is to obtain "access to a live master file" and (using either a program specially written for the purpose, a general retrieval program, or a utility) make surreptitious changes to the file. Changes may include modification of monetary amounts or changes to other data.
Substitution Of Dummied-Up Version For The Real File. This scheme depends upon one of two possible sequences of events.
In either case, the scheme begins with the perpetrator obtaining access to the master file, possibly under the guise of making a copy for use as test data. Then the file is run against a program, either in-house or at a service bureau. The program creates a very similar file, containing only a few modifications. The newly created file is then substituted for the live file and returned to the data library.
Access And Modification Of Transaction Files Prior To Processing. Possible fraudulent actions that may be involved in this type of scheme include addition, modification, and deletion of input transactions.
The Risk Assessment Methodology

The methodology, by identifying and prioritizing the major threats surrounding automated systems, establishes the framework for optimizing the use of resources in the detection of computer fraud.
The basic model on which the risk assessment methodology is based is shown in Figure 8. Examples of computer fraud deterrents are fear of imprisonment, high morals, and non-lucrative system types. Deterrents do not require the expenditure of resources since they are inherent, if they exist. As the model in Figure 8 shows, if deterrents are adequate to limit the threat of computer fraud to an acceptable level, as determined by threat assessment, there is no need to allocate resources to prevention or detection. If, however, the threat is not acceptable, computer fraud prevention through controls is the next logical check against fraud. The adequacy of prevention is determined through controls analysis.

A distinction between existing systems and systems under development is appropriate at this point. If the system is fully developed and operating, particularly if it is a large, complex system, it may be very expensive to add system controls. On the other hand, if the system is under development, it may be quite feasible and desirable to add controls based on the controls analysis. In either case, if it is economically feasible to add controls, serious consideration should be given to adding them, since prevention is preferable to detection just as deterrence is preferable to prevention. The reason for this is rather obvious: both controls and detection are expensive and require the expenditure of resources which could be used elsewhere. Thus, if either can be eliminated by avoiding fraud in other ways, there will be a gain. If prevention, or the composite of deterrents plus prevention, is adequate, then detection is not necessary. If, however, deterrents plus prevention are not adequate, computer fraud detection is needed.
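The decision flow just described can be summarized in a short sketch; the function name and boolean inputs are invented stand-ins for the outcomes of the model's threat assessment and controls analysis:

```python
def fraud_strategy(deterrence_adequate: bool, controls_adequate: bool) -> str:
    """Hedged sketch of the allocation logic: spend on prevention only if
    deterrents fall short, and on detection only if deterrents plus
    prevention together still fall short."""
    if deterrence_adequate:            # threat assessment: threat acceptable
        return "rely on deterrents"
    if controls_adequate:              # controls analysis: prevention adequate
        return "add preventive controls"
    return "allocate detection resources"

assert fraud_strategy(True, True) == "rely on deterrents"
assert fraud_strategy(False, True) == "add preventive controls"
assert fraud_strategy(False, False) == "allocate detection resources"
```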
Figure 8. Computer Fraud Detection Model

The model is circular since computer fraud detection becomes a deterrent as defined for the model.
Thus, a highly publicized detection capability could decrease or even eliminate computer fraud if it is perceived by would-be perpetrators as being effective.
The model is consistent with the time, people, and dollar constraints discussed earlier since it limits the allocation of resources for detection and prevention to only those levels considered essential. A feature of the model which may not be quite as obvious at this point is its inherent ability to limit total effort to only that level required for a specific system. This should become clear as "risk assessment" is further defined. Finally, the model avoids a flaw inherent in most presentations on the broader subject of computer security: assuming the largest computer systems and making it difficult or impossible to scale down recommended safeguards for the smaller system. The model is equally applicable to the smallest batch-oriented system and the largest integrated, real-time, distributed system.
It is rather common knowledge that many systems, large and small, are fully operational today with few, if any, preventive controls built in. Where controls do exist they are often of the "cookbook" variety, which are only partially effective in thwarting the computer felon. In a recent special report (Parker 1979), a clear distinction is drawn between the three basic threats with which the broader subject of computer security must deal: (1) natural disasters, e.g., extreme weather conditions; (2) human errors and omissions, e.g., incorrect tape labeling; and (3) intentional human acts such as fraud or sabotage. Parker states that the first two threats are empirically predictable and lend themselves to treatment through careful application of the "cookbook" approach of checklist do's and don'ts currently so plentiful in the literature. However, he points out, "cookbook" safeguards are much less effective against intentional acts such as fraud than they are against natural disasters or human errors and omissions.
Finally, in all but the simplest of systems, the overall effectiveness of deterrents and preventive controls may be difficult, if not impossible, to quantify. The research for this dissertation did not reveal any scheme, for example, which could provide some specific level of confidence that fraud would be prevented if certain controls were implemented and followed. There is a significant element of uncertainty involved even with stringent controls in most systems.
Threat Analysis

The purpose of threat analysis is to identify and evaluate the basic threats surrounding systems. The computer fraud manipulation schemes discussed earlier and presented in Figures 5, 6, and 7 provide insight into the types of activities which appear to be most susceptible to fraud. The threat analysis phase combines these manipulation schemes with corresponding perpetrators in the establishment and ranking of systems threats. The analysis uses a matrix approach starting with the blank matrix shown in Figure 9.
Manipulation schemes will be listed across the top of the matrix, to be represented by the vertical columns of the matrix.
Figure 9. Threat Analysis - Blank Matrix

Perpetrators of these schemes will be listed down the left side of the matrix and represented by the horizontal rows of the matrix. For the risk assessment in this chapter, the major cases that have been publicized and which were discussed earlier were used in identifying the schemes and perpetrators shown in the threat matrix in Figure 10.
The categories in Figure 10 were reported in Auerbach (1978-79), based on the publicized cases of computer fraud.

Figure 10. Threat Matrix with Scheme and Perpetrator Types

The threat assessment methodology for specific items in Chapter 5 also starts with the empty matrix in Figure 9, but a small group approach is presented for developing the threat matrix from that point, tailored to specific systems.
The perpetrators shown down the far left column of the matrix should be self-explanatory, with two exceptions. The first is the distinction between "Data entry/terminal operators" and "Clerk/teller", the first two entries. This distinction is essentially that the clerks and tellers deal directly with customers, suppliers, and others, whereas data entry and terminal operators do not.
The other category which might be somewhat vague is the last entry -- "Outsider (non-employee)". The perpetrator is considered an "outsider" if he or she is unknown and could conduct the scheme without specialized access or knowledge.
The next step in the threat analysis is to attempt to produce a ranking of the schemes and perpetrators in order of the relative frequency of occurrence and potential impact similar to the one in the typology for various types of computer systems. For this phase of the analysis, a generalized ranking is developed which is based on publicized case data.
The threat matrix shown in Figure 11 is an expanded version of the one in Figure 10. The schemes and perpetrators remain unchanged, but several items have been added. The far right column entitled "Average Loss ($000's)" has been added to show the average dollar value of computer fraud cases by type of perpetrator. For example, the average loss in fraud cases perpetrated by data entry/terminal operators was $727,000; by clerk/tellers, $58,000; etc. The whole numbers appearing in the individual cells of the matrix represent the number of occurrences of fraud involving the intersecting schemes and perpetrators. For example, the upper left cell of the matrix contains the number 9, indicating that there were 9 cases perpetrated by data entry/terminal operators using the "transactions added" manipulation scheme. The decimal numbers in parentheses in each cell were derived by dividing the whole number in the cell by the total
by the model. The effectiveness of prevention is determined through the use of "controls analysis" as indicated in block 5 of the model.
Here a clarification is necessary. In this chapter the generalized threat assessment is based on actual reported incidents of computer fraud which occurred in spite of deterrents and preventive techniques in existence. Thus, for the general threat assessment in this chapter,
blocks 2 through 5 of the model are compressed, with controls analysis, in effect, embedded in the assessment.

Figure 13. Threat Values - Descending Order
This is not the case in the specific threat assessment methodology presented in Chapter 5. This alternative methodology requires that the threat analysis and controls analysis be conducted separately in the manner shown in the model in Figure 8. The discussion of controls analysis is deferred until Chapter 5 since it relates directly to the methodology presented there.
In Chapter 3 a threat value was derived for the various combinations of computer fraud manipulation schemes and perpetrators of these schemes. In this chapter, a detection quotient will be developed for each of the threats associated with the threat values shown in Figure 12 of Chapter 3. In Chapter 7 the detection quotients for each threat will be used in describing a computer fraud detection resource optimization model which maximizes the detection capability for a given system within available resources. As indicated previously, in the typical system today it is not feasible to examine every transaction or change in an automated system. In fact, in most large systems, only a very small percentage of transactions or changes may feasibly be examined thoroughly enough to detect fraud if it exists. The optimization model described in Chapter 7, by using the detection quotients explained in this chapter, can ensure an optimum or near optimum allocation of resources to the various threats surrounding a given system. In effect, the model uses individual detection quotients to determine the allocation of resources which will maximize the detection quotient for an entire system.
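As a purely illustrative sketch (not the Chapter 7 optimization model itself, whose details are presented there), one naive way to spread a fixed examination budget across threats is in proportion to their threat values; all numbers below except the 82.2 threat value are invented:

```python
# Hypothetical threat values per scheme; 82.2 is the "transactions added"
# value cited from Figure 12, the other two are invented for this sketch.
threat_values = {"transactions added": 82.2,
                 "file alteration": 61.0,      # hypothetical value
                 "misreporting": 23.5}         # hypothetical value

budget = 10_000   # transactions we can afford to examine (invented figure)

# Allocate examination effort proportionally to each threat's value.
total = sum(threat_values.values())
allocation = {t: round(budget * v / total) for t, v in threat_values.items()}
```

A real allocation would use the detection quotients developed below rather than raw threat values, but the proportional split conveys the basic idea of directing scarce examination resources toward the largest threats.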
The detection quotient may be described as a value which measures the effectiveness of computer fraud detection resource allocation. This value, for individual system threats, is the product of three sets of factors. The first is the set of threat values developed in Chapter 3. The second is a set of values which represent, for each of the threat values in Chapter 3 (Figure 12), the probability of detecting at least one occurrence of fraud when it occurs at a given level or rate. The third is a set of values representing the converse of the rate of occurrence.
For example, the detection quotient for transactions added by data entry/terminal operators would be computed as follows, given the sample conditions. Referring to Figure 12, the threat value for transactions added by data entry/terminal operators is 82.2. Now assume that it is possible to ensure with a probability of 95 percent that, if fraud occurs at a rate of .1 percent (or .001) in the transactions, at least one occurrence will be detected. Or, conversely, that it is possible to ensure with a probability of 95 percent that, if no occurrences are detected in the sample, then 99.9 percent (the converse of the occurrence rate) of the transactions are fraud free. The detection
quotient for this particular example is computed as follows:
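Taking the detection quotient as the product of the three factors defined above, the example values can be checked with a short script; the required discovery-sample size at the end is an inference from the stated 95 percent assurance, not a figure from the source:

```python
import math

# The three factors for "transactions added by data entry/terminal
# operators", as given in the example above.
threat_value = 82.2     # from Figure 12
p_detect     = 0.95     # probability of detecting at least one occurrence
occ_rate     = 0.001    # assumed fraud occurrence rate (0.1 percent)

# Detection quotient = product of the three factors.
detection_quotient = threat_value * p_detect * (1 - occ_rate)
# ≈ 78.01

# The 95 percent assurance implies a minimum discovery-sample size n
# satisfying 1 - (1 - occ_rate)**n >= p_detect.
n = math.ceil(math.log(1 - p_detect) / math.log(1 - occ_rate))
# n = 2995 transactions
```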