When a majority of all system processing was done on legacy systems, information systems audit professionals recommended the protection of these systems largely through physical security measures. By locating the data center either on the top floor of the building or in the basement with secured points of entry and exit, by installing a swipe card system and by regularly reviewing its access logs, the facility and its processing were protected from intrusions. Threats were largely internal, posed by the disgruntled employee attempting to sabotage the last program he had worked on prior to his departure, or by the opportunistic system operator, hoping to pilfer a copy of a customer list to sell to a competitor.
Those were the "good ole days." Now, information systems audit professionals wish the problems were still so simple. With the growth of the World Wide Web, nearly every business can reach every other business or individual through a computer interface. An organization can no longer keep out the intruder simply by locking the doors.
In this environment, Java, one of the most promising products for application development, also presents some of the greatest risks. With its ability to display pictures, animate objects and provide sound, Java is well suited to the creation of attention-grabbing sites. Architecturally, Java programs execute inside a virtual machine, a simulated computer implemented in software on the real one. This virtual machine is isolated from the real computer and confined to a protected area called the Java Sandbox. In spite of this feature, the government and the media have reported a series of attacks in which hackers have used Java to invade Internet locations. Some hackers have used "holes in the sandbox" to access resources. Hostile applets have impersonated trusted code and gained access to external resources. While developers at Sun Microsystems, Netscape and Microsoft have recognized and corrected many of these problems, hackers continue to find new holes. Most recently, applets have been used to perpetrate man-in-the-middle attacks, which involve both tampering and spoofing: fake sites are substituted for real ones, and victims are tricked into sending security information, such as user identification, to the intruders.
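The sandbox idea can be illustrated with a simplified sketch. The class and policy below are invented for illustration and are not the actual JDK internals: conceptually, before downloaded code touches a local resource, the runtime consults a policy and refuses anything outside the permitted area.

```java
import java.util.Set;

// Hypothetical illustration of the sandbox concept; the class name and
// the policy are invented for this sketch, not real JDK internals.
public class SandboxSketch {
    // Toy policy: downloaded code may read only files under one directory.
    static final Set<String> ALLOWED_PREFIXES = Set.of("/tmp/applet/");

    // Returns true only when the requested path falls inside the sandbox.
    static boolean checkRead(String path) {
        return ALLOWED_PREFIXES.stream().anyMatch(path::startsWith);
    }

    public static void main(String[] args) {
        System.out.println(checkRead("/tmp/applet/data.txt")); // inside the sandbox
        System.out.println(checkRead("/etc/passwd"));          // outside: denied
    }
}
```

The "holes in the sandbox" the attacks exploit are, in effect, ways to reach a resource without this kind of check ever being consulted.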
Why is this happening in the Java environment? Because Java uses an "open architecture": security is expected to hold even though the product's entire operating specification is in the public domain. For its security model to succeed, many elements within Java must work perfectly. The bytecode verifier, classloader and Security Manager must interoperate flawlessly; otherwise, the entire security model is subverted. While stringent testing and public exposure can minimize errors, current technology cannot rigorously prove that the overall Java software (28,000+ lines of code) is error-free.
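One of these cooperating checks can be seen directly at the classloader boundary: the JVM refuses to define a class from bytes that fail its class-file checks. The minimal sketch below (the class name is ours) feeds deliberately malformed bytes to ClassLoader.defineClass and observes the rejection.

```java
// A minimal sketch of the classloader/verifier boundary: bytes that are
// not a well-formed class file are rejected before they can ever run.
public class VerifierDemo extends ClassLoader {

    // Attempts to define a class from raw bytes; returns true only if
    // the runtime accepts them as a well-formed class file.
    static boolean accepts(byte[] bytes) {
        try {
            new VerifierDemo().defineClass(null, bytes, 0, bytes.length);
            return true;
        } catch (LinkageError e) {
            return false; // malformed bytecode is refused at load time
        }
    }

    public static void main(String[] args) {
        byte[] bogus = {(byte) 0x12, (byte) 0x34}; // not a valid class file
        System.out.println(accepts(bogus)); // the loader rejects the bytes
    }
}
```

This is only the load-time check; the article's point is that the verifier, classloader and Security Manager must all hold at once, and a flaw in any one of them undoes the others.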
Because of this level of complexity, there are many paths to subverting the security model. Those that pose the most concern are differences between the Java language and the bytecode semantics, deficiencies in the design of the language and the bytecode format, the lack of audit trails, and the inability of the user to control a Java applet once it is downloaded. Compilers for other languages (C or Ada, for example) can output bytecode that looks like Java bytecode to the verifier, but the bytecode produced by these compilers is unlikely to follow all of the Java language restrictions, and the bytecode verifier cannot catch every violation. Similarly, the Java language design has some features that weaken the security model, the most significant being that the Java I/O classes are public. Java also does not provide a standard mechanism for automatically producing audit trails, which are used to assess the accuracy and integrity of system processing. Finally, the user lacks control over a Java applet once it has been downloaded into the local system.
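The exposure created by public I/O classes can be shown with a short, self-contained sketch. Any loaded class can construct a java.io.FileInputStream; nothing in the language itself forbids it, so when no runtime security check intervenes (as in this standalone program, which reads back a temporary file it creates itself), the read simply succeeds.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: because the java.io classes are public, loaded code can open
// local files; only a runtime security check could stop it. With no
// such check in place, the read below succeeds without complaint.
public class PublicIoDemo {

    // Creates a temporary file, reads it back through the public
    // FileInputStream API, and returns the contents.
    static String readBack() {
        try {
            Path p = Files.createTempFile("demo", ".txt");
            Files.write(p, "secret".getBytes());
            try (FileInputStream in = new FileInputStream(p.toFile())) {
                return new String(in.readAllBytes());
            } finally {
                Files.delete(p);
            }
        } catch (IOException e) {
            return "";
        }
    }

    public static void main(String[] args) {
        System.out.println(readBack()); // the file's data is read freely
    }
}
```

A hostile applet in the same position would be reading the victim's files rather than its own, which is why the public visibility of these classes weakens the model.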
Language developers, systems implementers and companies doing business on the World Wide Web must understand the risks and rewards of using languages like Java. In this context, it is the responsibility of information systems audit professionals (many of whom are members of the Information Systems Audit and Control Association with the professional designation of Certified Information Systems Auditor) to sound the alarm: to encourage language developers to fix the flaws and plug the holes; to inform management of potential threats to corporate resources when doing business in the World Wide Web's lucrative marketplace; and to evaluate the work of system implementers in this "risky" new environment. Only then can the process of securing the World Wide Web begin.
About the Authors
Linda Garceau, CPA, DBA, is an Associate Professor of Accounting at Cleveland State University. She is a member of the Information Systems Audit and Control Association. She can be reached at [email protected].
Victor Matos, Ph.D., is an Associate Professor of Computer and Information Science at Cleveland State University. He can be reached at [email protected].