
Despite recent high-profile attacks, application developers often consider security something to tack on at the end of a project, and history encourages this approach. It's unusual to see security considered a fundamental design element in a programming language.

But here's where the Java 2 platform deviates radically from the norm: security is in fact a crucial design goal, interwoven into the language's core bootstrap mechanisms. Even before a virtual machine loads its first class, it's aware of specific security policies and permissions.

Unfortunately, security policies are one of Java's more confusing elements. Many developers find them outright irritating. To make matters worse, a great deal of printed and online tutorial code, even from official sources, circumvents security features by granting all permissions to all code. Usually there's some sort of caveat that these sample policies shouldn't be used in real-world applications. This isn't particularly helpful; real-world policy examples remain scarce, and too many applications remain dangerous.

This article examines Java's core security model. I'll cover how the model operates throughout a class's lifecycle, its internal logic and how it can be extended for smarter code.

Background: Security Fundamentals
Computer science served as a catalyst in the evolution of burglary. All the traditional attacks still apply here, but they've mutated to fit a new context.

Secrecy attacks represent the most familiar security challenge. Examples include a thief who cracks a mail server and steals messages, and a snoop who records an online shopper's SSL session and eventually recovers the cryptographic key that can decrypt it. It could also include bribing or tricking someone into divulging a password (social engineering and "rubber hose" techniques).

Integrity attacks involve the altering of data. Culprits include an attacker who successfully alters records in a database or intercepts a message from a server and replaces it with a new message before a client receives it, a "man-in-the-middle" attack.

"Trojan Horse" or "Trojaned" applications represent a special case of integrity attacks. Named after the device that sealed the fate of Troy, these attacks occur when a person replaces an existing application with a new version or adds some malicious code to a familiar application. Imagine, for example, a scenario in which an attacker replaces your Web browser with a nearly identical browser that secretly sends all your credit card information to some hidden server.

Not all burglaries are committed artfully; some are brute-force muggings. The computer equivalent of a mugging is a "denial-of-service" attack, an attempt to render a server or network unusable. For example, an attacker might flood a Web server with a tremendous number of requests. While attempting to manage this load, the attacked machine or network might be unable to respond to any requests at all, possibly failing altogether and shutting down.

It's worth noting that it isn't really people who challenge computer security. Crackers (those who trespass on computer systems) and phreakers (eavesdroppers who monitor telecom network transmissions) get the press, but it's their tools, that is, their code, that do the work, and as application developers we're concerned with code. Of course, malicious code doesn't come from attackers only; it can also be a poorly written app that inadvertently compromises system-level resources or hinders the performance of other apps.

Malicious code also includes viruses. Though true viruses are traditionally low-level, assembly-language programs, most modern "viruses" are actually mere macro scripts that function only in specific operating systems or applications.

Depending on the effects of a virus, it may result in a secrecy attack, integrity attack or denial-of-service attack. The virus may be delivered through a Trojaned application or Trojaned e-mail message. The fact that viruses rarely victimize Java applications is due to the language's insistence on a firm policy and permission design.

Java-Centric Security Concerns
Just as Java introduces new approaches to computing, it also introduces new spins on traditional security problems. How do we ensure that an object marshaled across an RMI system isn't victimized by a man-in-the-middle attack? How do we defend a portable Jini-enabled device from a denial-of-service attack? Java's security APIs are designed to be extended to address such issues.

There are many pieces to the Java security puzzle spread across several API subsets. First and foremost is the core security package that consists of classes in the java.security and java.lang packages. All Java applications, including Enterprise JavaBeans, Swing clients and Web applets, are affected by this core architecture. Regardless of whether you delve into cryptography or authentication APIs, and regardless of the scale of your application, the core Policy and Permissions model will affect your Java 2 project even if you stubbornly insist on ignoring security.

The Policy and Permissions model addresses integrity attacks and, to a lesser degree, denial-of-service and secrecy attacks. The reason the model touches on all these attack categories is that it's built on a security truism: the only secure application is one that doesn't run at all. The core security model deals with which applications are capable of running and in what manner they're permitted to function.

Since application developers are personally required to deal with the security model, risky applications can rarely be developed unwittingly. If a method includes a known security risk, it simply won't execute unless some developer acknowledges the risk and specifically grants a permission.

Policy-Based, Permission-Driven Architecture
Generally speaking, a Permission is simply a statement that some "thing" can execute a particular "action" on a particular "target." This notion is common to most multiuser operating systems and platforms: a user (the thing) may have Permission to access a particular file (the target) and read it (the action). Java sticks fairly closely to this definition.

Here's the architecture in a nutshell: specific code (described by a CodeSource) can perform certain actions on certain targets (described by a Permission). Policy objects manage the relationships between CodeSource objects and Permission objects (see Figure 1).

Figure 1

Permissions concern classes, not objects. They deal with what a piece of code is allowed to do regardless of its instantiations and who's executing it. To make matters even simpler, Permissions are always positive: they never deny actions, only grant them.
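
To make the target/action idea concrete, here's a minimal sketch using the standard FilePermission class (the file paths are hypothetical). A Permission's implies() method is how one grant decides whether it covers another request:

import java.io.FilePermission;

public class PermissionDemo {
    public static void main(String[] args) {
        // "/tmp/-" is a recursive target: the /tmp directory and everything below it.
        FilePermission broad  = new FilePermission("/tmp/-", "read,write");
        FilePermission narrow = new FilePermission("/tmp/data.txt", "read");

        // true: the broader grant covers the narrower request
        System.out.println(broad.implies(narrow));

        // false: nothing here grants the "delete" action
        System.out.println(narrow.implies(new FilePermission("/tmp/data.txt", "delete")));
    }
}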

The relationship between CodeSource and Permissions is encapsulated in a policy and available before any application logic is even considered by a virtual machine. Consider an object's lifecycle (see Figure 2).

Figure 2

When an object is instantiated, its class must be defined first. Based on the class's CodeSource, it's then linked to a set of Permissions that a policy has prepared for it. The virtual machine is aware of how to construct this policy upon start-up (more on this later). Finally, an instance of the class is created.

When that instance is subsequently acted on through method invocations, a SecurityManager may check its Permissions at runtime, throwing AccessControlExceptions if any class in the stack of executions lacks the necessary Permissions.

This is simplifying things a little; behind the scenes, matters are a bit more complex. For instance, the ClassLoader will elegantly group Permissions in ProtectionDomains and associate a loaded class with a ProtectionDomain rather than directly with a PermissionCollection. Furthermore, you probably recognize that ClassLoaders are themselves classes loaded by other ClassLoaders. In fact, there's a chain of ClassLoaders leading back up a hierarchy that begins at the virtual machine's special, primordial, native ClassLoader, which is completely inaccessible (it's returned as null). And we haven't even mentioned the complexities of checking every class in an execution stack for the proper Permissions at runtime.

As application developers, we usually don't need to concern ourselves with these details. More importantly, everything in the model is extensible, so we can create custom Permissions, require security checks wherever we like in application code and craft custom Policy objects. Moreover, we can easily create different policies for different users, altering end-user functionality without touching the source code itself and without tangling with the SecurityManager.

Cracking Open the Policy Matrix: CodeSource and Permissions
Policy is fairly simple. It links code to Permissions in a more or less key=value format. Its contents, however, are worthy of a closer look.

CodeSource, defined in java.security.CodeSource, identifies class files for security purposes. Don't confuse it with codebase, the property used to describe the root location of an applet or application.

CodeSource characterizes classes by location and by the identity of their creator or deployer. The first trait is straightforward: we refer to location using either the file or HTTP protocol. The second characteristic is a little trickier. Java code collected in a JAR file can be cryptographically signed by a specific identity. This process allows a virtual machine to verify the application's integrity.
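
To see what a Policy has prepared for a given CodeSource, you can ask the installed Policy object directly. A minimal sketch, with a hypothetical location URL and no signing certificates:

import java.net.URL;
import java.security.CodeSource;
import java.security.PermissionCollection;
import java.security.Policy;
import java.security.cert.Certificate;

public class PolicyPeek {
    public static void main(String[] args) throws Exception {
        // Describe code by location only (no signers).
        CodeSource cs = new CodeSource(new URL("file:/home/code/"), (Certificate[]) null);

        // Ask the installed Policy what it grants to that CodeSource.
        PermissionCollection pc = Policy.getPolicy().getPermissions(cs);
        System.out.println(pc);
    }
}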

Here's an example: imagine that Alice has created an application and distributed it in the archive "code.jar". Bob downloads code.jar and wants to grant special Permissions to it. But he wants to be sure no one has tampered with or Trojaned the code.jar file, so he doesn't grant Permissions to any file named code.jar but only to code.jar archives that Alice has signed. The authentication mechanism runs behind the scenes, using KeyStores. A good resource for information on signing code and the nature of digital signatures is Java Cryptography by Jonathan Knudsen.
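
As a rough sketch of that workflow (the alias, keystore and archive names are hypothetical), Alice generates a key pair and signs her archive with the standard keytool and jarsigner utilities, and Bob can verify the signature before granting anything:

keytool -genkey -alias Alice -keystore alice.keystore
jarsigner -keystore alice.keystore code.jar Alice
jarsigner -verify code.jar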

While CodeSource is defined by only two characteristics, Permission objects offer hierarchical variety. The abstract top-level Permission class, java.security.Permission, introduces the notion of a target and a series of actions. Its direct subclasses include AllPermission, FilePermission, SocketPermission and BasicPermission. BasicPermission in turn has a number of subclasses, most of which have no "actions" associated with them.

When you're creating your own Permission objects, you're free to subclass either Permission or BasicPermission. Use of AllPermission should be avoided even for testing code. As its name suggests, AllPermission passes all security checks. Instead of using AllPermission, try assigning your code the specific Permissions it needs from the outset.

A Permission's type relates to the general function it addresses: FilePermission objects relate to file I/O functions. The possible targets associated with a Permission should follow reasonably from the type: a FilePermission will have targets that are files. The actions in turn follow reasonably from the target, so a FilePermission might provide actions of "read" or "write" on a particular file target.

A word about actions: not all Permissions have them. In fact, most of BasicPermission's subclasses don't use them. RuntimePermission, for example, lists a number of possible targets but no actions at all. The targets themselves, however, imply an action. For example, the RuntimePermission target "createClassLoader" implies a target of "ClassLoader" with an action of "create." Since there's only one possible action, it's included in the target as a sort of shorthand. This seemingly arbitrary inclusion and exclusion of targets and actions doesn't exactly make Permission usage intuitive. Until you become accustomed to the specific Permissions, refer to the APIs for guidance (a good reference is http://java.sun.com/products/jdk/1.2/docs/guide/security/Permissions.html).
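
A custom Permission for an application-specific operation can be a very small class. Here's a minimal sketch; the AppPermission name and its "shutdown" target are hypothetical, not part of the standard API. BasicPermission supplies equals(), hashCode() and implies(), so a subclass often needs little more than constructors; the two-argument form is there so the policy machinery can instantiate the Permission reflectively from a policy file.

import java.security.BasicPermission;

public final class AppPermission extends BasicPermission {

    // Target-only form, e.g. new AppPermission("shutdown").
    public AppPermission(String target) {
        super(target);
    }

    // Target-and-actions form; BasicPermission ignores the actions,
    // but the policy file machinery expects this constructor signature.
    public AppPermission(String target, String actions) {
        super(target, actions);
    }
}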

Mastering Policy Files
In the default Java 2 implementation, policy information is stored in plain text files, from which the VM instantiates Policy objects.

If you're interested in creating a new security provider, you could craft an implementation that stores the policy data in a database, a smart card or even an Enterprise JavaBean. For the purposes of this discussion I'll focus on the existing flat file implementation provided by Sun.

Out of the box, Java uses one default file to generate its core policy object. This policy must be available to every virtual machine so its data source will always be in the same place: ${JAVA_HOME}/jre/lib/security/java.policy. You can create additional policy files by following its syntax.

Phrased loosely, the file format simply says "grant" some code (optionally from a specific location, and optionally signed by some certificate) permission to access a "target" using a specific "action."

The codeBase and signedBy attributes are optional. If you don't include them, the grant will apply to all code. You can also employ the wildcard "*" symbol, though a wildcard succeeds only when it follows a period-delimited segment or stands alone. For example, "*" and "a.*" are both valid, but "a*" isn't and neither is "a*b" (see Listing 1).
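
As an illustration, here's a sketch of a grant entry with both attributes omitted, which therefore applies to all code (the property target is just illustrative):

grant {
    // any code, from any location, signed or not, may read user.home
    permission java.util.PropertyPermission "user.home", "read";
};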

Once you've written a policy file, you can integrate it in a number of ways. To dynamically link policy files, include the file as a runtime parameter and add the following flag to your java invocation: -Djava.security.policy=[fully-qualified filename]. For instance, the command "java -Djava.security.policy=/foo/home/myperms.policy Foo" will execute "Foo.class" with a policy instantiated using the file "myperms.policy".

You can also drop the policy file in a user's directory to make it available for all code run by that user. This is a handy tactic for systems administrators who need to assign different Permissions across a user base. This method is possible because the java.security file specifies user directories as appropriate locations for policy files. Of course, this line can also be removed to prevent users from creating such policies.

Finally, you can make your new policy file globally available to a virtual machine by creating an entry for it in the java.security file. This has the disadvantage of requiring a separate installation procedure when code is moved across platforms, but may be useful for granting privileges to applications that are seldom redeployed.

To install a policy file statically in this manner, take a look at the following lines of the java.security file:

policy.url.1=file:${java.home}/lib/security/java.policy
policy.url.2=file:${user.home}/.java.policy

You need to add a similar line. For example, if you have a policy file named myapp.policy located in the "/usr/local" directory, you'd add the following line to the entries in java.security:

policy.url.3=file:/usr/local/myapp.policy

This will inform the virtual machine of your policy object every time Java starts.

A simpler way to create and add policies is to use Java's policytool, a GUI interface to the process. Since policytool hides syntax and usage, it's important not to become overly reliant on it. Policytool is mostly helpful for administrators and nonprogrammers. It's located in the ${JAVA_HOME}/bin directory and can be executed from a command prompt with the command "policytool".

What Happens in a Security Check?
While application developers typically needn't bother with the actual routines involved in Permissions checking, understanding the low-level functions may aid in the security design process. The relevant objects are SecurityManager, AccessController and AccessControlContext (see Figure 3).

Figure 3

First, SecurityManager comes into the picture. When a method requires a Permission to properly execute, that method retrieves the active SecurityManager and calls its checkPermission() method. Who decided which core methods in the Java language are protected? That is, who determined that java.io.FileInputStream.read() should ask a SecurityManager before executing?

The language's security designers made these decisions and the existing standard Permissions indicate their choices.

You can protect operations in your own application code by adding the same code block that the Java designers used:

SecurityManager sm = System.getSecurityManager();
if (sm != null) {
    // If the permission isn't granted, checkPermission()
    // throws an AccessControlException.
    sm.checkPermission(perm);
}

If you want core Java code to pass more security checks than the designers created, you must develop subclasses and require security checks in their methods. For example, if you decide there's a need to restrict the creation of some large or sensitive object, subclass its class (provided it isn't final, as String is) and add the SecurityManager.checkPermission() code block to any constructor or method you wish to protect. Theoretically, you could force a Permissions check before your application executed any method in the language.
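
As a sketch of that approach, here's a hypothetical class (reusing the AppPermission sketched earlier, so none of these names come from the standard API) that guards one of its own methods exactly the way the core libraries do:

public class GuardedService {

    // A sensitive operation, protected the same way core library methods are.
    public void shutdown() {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // Throws AccessControlException unless every class on the calling
            // stack has been granted AppPermission("shutdown") by the policy.
            sm.checkPermission(new AppPermission("shutdown"));
        }
        // ... perform the actual shutdown work here ...
    }
}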

SecurityManager, however, runs interference only for your code. In the current implementation its checkPermission() method does just one thing: it passes the Permission off to AccessController.
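
In outline, the default checkPermission() amounts to little more than this (a simplified sketch of the delegation, not the verbatim source):

public void checkPermission(Permission perm) {
    // SecurityManager defers the real decision to AccessController.
    java.security.AccessController.checkPermission(perm);
}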

AccessController is the true workhorse of the security core. It's a complicated beast, relying on a number of native calls, and it's difficult to do it justice in a few short sentences. Essentially, when checking a Permission, AccessController takes a snapshot of the current execution stack, encapsulated in an AccessControlContext, which uses native, VM-specific methods to gather an array of classes representing the current calling Thread's stack. AccessController then checks the Permissions (in the form of ProtectionDomains) associated with each class in this stack.

If any class, even one, lacks the Permission being tested, an AccessControlException is thrown. Otherwise the method returns silently.
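
Application code can also consult AccessController directly. A minimal sketch (the property target is just an illustration; whether the check passes depends entirely on the policy in force):

import java.security.AccessControlException;
import java.security.AccessController;
import java.util.PropertyPermission;

public class DirectCheck {
    public static void main(String[] args) {
        try {
            // Walks the calling stack's ProtectionDomains, SecurityManager or not.
            AccessController.checkPermission(new PropertyPermission("user.home", "read"));
            System.out.println("permission granted");
        } catch (AccessControlException e) {
            System.out.println("permission denied: " + e.getPermission());
        }
    }
}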

Higher Powers: The doPrivileged() Block
You can see that eventually security checks equate to whether all classes in a stack analyzed by a particular AccessControlContext are linked to the necessary ProtectionDomain. But there's a way to request that the calling class be tested only for the necessary Permission, not the entire stack. The slightly unwieldy yet useful doPrivileged() block permits such functionality.

Say Admin.class has Permission to read a secret string. User.class does not ­ and should not ­ have this Permission. Yet in certain circumstances a User object needs to be able to execute Admin's sensitive method and get the value of that string. You can permit this without granting User.class access to the value's data source by using the doPrivileged() block, which looks like Listing 2.

Possible uses for such a code block might include changing a password or reading a KeyStore file. The doPrivileged() block permits such actions in isolated code snippets without requiring developers to grant a broad Permission to an entire application. Since all core Java classes are granted all Permissions, it can be used to execute sensitive methods even if a client application has no Permissions at all. Note: if the doPrivileged() block returns a value, it always returns an Object, so the result must be cast to the appropriate type. It's obviously dangerous, suitable only for short functions, but it's an extremely useful tool when used cleverly.

Determining What Permissions Are Needed
Judging by posts to various Java mailing lists, deciding what privileges your code needs is often a frustrating endeavor. This may be particularly true if you're porting jdk1.1 code to the Java 2 platform, which suddenly requires all sorts of Permissions to execute methods that previously worked fine.

There's currently no tool available to automatically generate a policy file for specific code. The best way to tackle the task is to plan with Permissions and policy in mind from the beginning and design accordingly. This approach has the benefit of putting a great deal of AccessControl checking at the designer's disposal so that developers can better customize applications for different use cases.

If a developer misses the security element at design time, he or she can solve the issue through runtime testing. Pay close attention to the stack trace that appears when an AccessControlException is thrown; its message explicitly reports which Permission is missing. Try adding that Permission to the policy file and test the code again. If it succeeds, though, consider the implications of adding the Permission before you commit to it. Consider whether a doPrivileged() block or some alternative might provide better security. At the very least, be aware of the risks of any Permission you grant and watch for exploitation of those risks during your program's operation.

Keep in mind that without using doPrivileged(), every class in a thread of execution must be granted the Permission for Java to be happy. If class A calls a method in class B that calls a method in class C that requires a Permission, all three classes must be granted that Permission.
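
To illustrate, here's a hypothetical policy sketch in which classes A, B and C live in three different archives; each grant repeats the same Permission so the entire call chain passes the check (all paths are made up):

grant codeBase "file:/apps/a.jar" {
    permission java.io.FilePermission "/data/report.txt", "read";
};

grant codeBase "file:/apps/b.jar" {
    permission java.io.FilePermission "/data/report.txt", "read";
};

grant codeBase "file:/apps/c.jar" {
    permission java.io.FilePermission "/data/report.txt", "read";
};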

Conclusion
It's impossible to craft completely safe applications. Short of not developing or executing code at all, there's simply no such thing as absolutely secure software. But by understanding the Java 2 security core and integrating it into the design process, you'll find your applications are highly resilient to many common attacks and ensure that your system isn't compromised by malicious Java code.

For further information on Java security, check the official documentation at www.javasoft.com/security. An excellent in-depth study of the architecture is the highly recommended Inside Java 2 Platform Security by Li Gong, the Sun scientist who led its development.

Author Bio
Patrick Sean Neville has produced Web-based applications and Java systems for television companies, advertising agencies and a variety of new media companies. Creator of The Code Studio (www.codestudio.com), he's also the chief Java engineer at Stockback.com.
He can be reached at: [email protected].

	

Listing 1:

grant codeBase "file://home/code", signedBy "Admin" {

    // example with no action
    permission java.lang.RuntimePermission "setFactory";

    // two examples with action fields, one containing a wildcard
    permission java.net.SocketPermission "*:*", "connect,accept";
    permission java.util.PropertyPermission "user.home", "read";
};


Listing 2:

/* This is a method in Admin.java, which any class may call. */
public String getValue() {
    String result = (String) AccessController.doPrivileged(new PrivilegedAction() {
        public Object run() {
            // Admin has permission to access this method
            // even if its calling classes do not.
            return readSecretValue();
        }
    });
    return result;
}



 
