
In this article I introduce two practical software design techniques that are useful in the development of high-performance, horizontally scalable, and immediately cost-effective enterprise solutions built on J2EE technology.

I'll be using a distributed enterprise application that runs in a J2EE application server. The first technique is designing the overall system according to the external servlet containers (external SCs) architectural approach, which aims to increase the scalability of an enterprise application and reduce the cost of the overall solution. The second is a value object caching mechanism (VO cache) with an integrated JMS-based update-tracking facility. Its purpose is to compensate for the performance penalties caused by moving presentation logic out of the application server (AS) process, and to increase performance by eliminating unnecessary communication with the underlying business and persistent-storage layers. A key feature of the proposed VO cache implementation is its ability to synchronize cache contents with the underlying persistent storage without relying on timeouts or refreshes. Combined, both techniques will help you dramatically increase the performance of your enterprise applications.

The modern approach to achieving higher performance is to design an enterprise application that is distributable and horizontally scalable. But scaling an application the traditional way, by employing additional application server instances, is costly, since you have to procure more expensive licenses. An alternative is the external SC architecture, which also allows the presentation-logic layer to be scaled independently. This technique delegates presentation data formatting to a number of out-of-process SC instances running on separate physical machines. Since the licensing cost of an SC engine is far lower than that of a full-featured application server, you could end up saving thousands of dollars by scaling solutions this way.

In many enterprise applications it's typical for the number of read accesses to vastly exceed the number of writes. Developers usually implement some caching mechanism on the client side (or at the presentation tier) of an enterprise application to reduce the load on the underlying tiers. However, ordinary caches don't guarantee that their content is actually consistent with the data in persistent storage. Instead, cache content is "considered" up-to-date for a limited time and is dropped or refreshed when this period elapses. The VO cache implementation is free of this disadvantage because it has a special built-in update-tracking facility. Cache components are designed to be used as private objects on the client side of an enterprise application, for example, in an external SC instance or in a fat Java client application. The following is a summary of the benefits offered by an application built by applying both techniques:

  • Data is always kept in sync with the underlying persistent storage.
  • Client applications are provided with instant read-write data access.
  • The presentation- and business-logic tiers are scaled separately.
  • Performance increases due to caching of enterprise bean VOs.
  • Performance increases due to business logic being consolidated into a more compact AS node.
  • The cost of the overall system is reduced.

    Target Application Architecture
    First I want to introduce the concept of a target application: an enterprise application built according to solid design principles so that both proposed design techniques can be leveraged effectively.

    Number of Isolated SCs at the Front End
    At the highest level of abstraction, the target application is a number of independent standalone SC instances, each running on its own physical machine and connecting to a dedicated AS node (a single multi-CPU physical machine or a cluster) over a network. In this approach, the only component doing real work in the AS node is the EJB container, not the SC. This relieves the application server of presentation data formatting, saving its CPU resources for effective servicing of EJB components; all data formatting is delegated to a number of less expensive external SC engine instances. The number of nodes hosting SCs can be estimated by the ratio N_SC ≈ DF_web / DF_enterprise, where DF_web is the average dataflow between the presentation tier and Web clients, and DF_enterprise is the average dataflow between the presentation- and business-logic tiers. Despite offering better scalability than traditional horizontal scaling, this approach isn't widely used because of one significant disadvantage: the increased number of RMI calls significantly hurts performance once the system becomes distributed. I'll show how to overcome this by supplying external SCs with an entity bean state caching component.
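    As a concrete illustration of wiring an external SC to the dedicated AS node, here is a minimal jndi.properties fragment a standalone Tomcat instance might use to reach a remote JBoss JNDI service. This is a sketch under assumptions: the host name as-node.example.com is a placeholder, and the property values are the standard JBoss naming-client settings.

```properties
# Standard JBoss JNDI client settings (jnp protocol, default port 1099).
# Replace as-node.example.com with your dedicated AS node or cluster alias.
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
java.naming.provider.url=jnp://as-node.example.com:1099
```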

    Private VO Caches
    As shown in Figure 1, each SC instance holds its own private VO cache component. Because these components are allocated in-process with their hosting SCs, and because a presentation tier built from separate SC instances is extremely scalable, the overall system performance improves dramatically.

    Figure 1

    Shared O/R Mapping Components
    The only modules run by the dedicated AS node are the ones that contain O/R mapping and business logic. Mapping components are simply the set of entity EJBs that handle the persistence of business domain (DOM) objects and ensure their integrity (cascade operations). The state holder objects of such entity beans will be referred to as Domain VOs. Mapping EJBs are accessed (directly or by means of Session Façade components) by all kinds of client applications, from SC instances to ordinary fat Java clients. In addition, for the VO cache update-tracking subsystem to work properly, the AS must have exclusive access to the underlying persistent storage; that is, any data manipulation bypassing the O/R mapping layer is prohibited. This is similar to the requirement imposed when an AS is configured with commit option A (reusing a pooled EJB state; see the J2EE spec for more information), which likewise assumes exclusive access to avoid data inconsistency.

    Caching Attribute VOs and Noncaching VO Lists
    The VO cache implementation described in this article was designed to operate solely on Domain VOs. Although it's possible to adapt it to handle Custom VOs (ones built according to the needs of the client application), that discussion is outside the scope of the article. Another issue to be addressed is the strategy for handling VO lists when the main entity (Title, for example) has associated attributes (Author and Publisher) related to it as one-to-many. Two ideas are involved here: obtaining the list of remote object IDs in one call (the Bulk Retrieval Method or Fast-Lane Reader patterns) and then querying the attribute EJBs individually via a special cache-aware Proxy/Factory object. Attributes, which are often entities that don't occupy much space in persistent storage, are good candidates for caching. Finally, a client-side VO cache built according to these principles offers much more efficient caching of EJB states than the built-in AS instance-pooling facility, since it eliminates all unnecessary access to the business tier.
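    To make the list-handling strategy concrete, here is a minimal sketch of the two-step retrieval just described. The TitleFinder (bulk key-list call) and AttributeLookup (cache-aware Proxy) interfaces are my own illustrative stand-ins, not names from the article's library.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two-step list strategy: one bulk call for the main-entity
// keys, then per-key attribute lookups through a cache-aware proxy.
// TitleFinder and AttributeLookup are hypothetical stand-ins.
public class CatalogPage {

    public interface TitleFinder {             // bulk key-list retrieval
        List findTitleKeys(int page);
    }

    public interface AttributeLookup {         // cache-aware Proxy lookup
        Object getVo(Object key);
    }

    private final TitleFinder titles;
    private final AttributeLookup attributes;

    public CatalogPage(TitleFinder titles, AttributeLookup attributes) {
        this.titles = titles;
        this.attributes = attributes;
    }

    // One remote call for the key list; the per-key lookups are expected
    // to hit the local VO cache most of the time.
    public List loadVos(int page) {
        List result = new ArrayList();
        List keys = titles.findTitleKeys(page);
        for (int i = 0; i < keys.size(); i++) {
            result.add(attributes.getVo(keys.get(i)));
        }
        return result;
    }
}
```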

    On-the-Fly Assembling of Presentation Data
    Since the VO cache operates solely on Domain VOs, the human-readable form of the data needs to be assembled on the fly at the presentation tier of the enterprise application, just before it is sent to the end user (browser).

    VO Cache Reusable Library
    Abstract O/R Mapping Components

    The abstract O/R mapping subsystem is the part of the library in which the common interfaces of mapping components are defined, and in which EJB connection and VO cache lookup code is encapsulated. All VOs are immutable. Each VO instance is in a one-to-one relationship with its corresponding Key instance (which may be composite). Intertier data transfer consists of exchanging VO instances and sending JMS messages. Figure 2 illustrates the data flow between the major application components.

    Figure 2

    A helper Proxy object encapsulates the details of the interaction between the O/R mapping components and the VO cache, hiding them from the client application programmer. A client application doesn't know anything about the presence of a VO cache; the Proxy simply acts as an object factory for VO instances. The state of an O/R mapping component may be updated either by accessing the EJB directly or through a Proxy object. Both cases are handled appropriately.
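    A minimal sketch of how such a Proxy might look: it consults a private cache first and falls back to the remote finder only on a miss. RemoteFinder stands in for the real EJB home/finder call; all names here are illustrative assumptions, not the article's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Cache-aware object factory: the client asks the Proxy for a VO by Key
// and never sees whether the answer came from the cache or from the EJB.
public class VoProxy {

    public interface RemoteFinder {            // stand-in for the EJB call
        Object findByKey(Object key);
    }

    private final Map cache = new HashMap();   // Key -> VO
    private final RemoteFinder finder;

    public VoProxy(RemoteFinder finder) {
        this.finder = finder;
    }

    // Check the local cache first; go remote only on a miss.
    public Object getVo(Object key) {
        Object vo = cache.get(key);
        if (vo == null) {
            vo = finder.findByKey(key);        // remote call on cache miss
            if (vo != null) {
                cache.put(key, vo);
            }
        }
        return vo;
    }
}
```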

    VO Cache Core
    The VO cache is an ordinary (non-enterprise) JavaBean that implements the Singleton pattern, ensuring that only one instance resides in the memory of a JVM. The purpose of this component is to hold and manage VO/Key pairs. The Key object is used to look up a VO in the cache, as well as to locate remote EJB components. The Lazy-Load pattern is applied to handle updates of cached entries: when an "invalidate" message arrives, the corresponding entry (VO instance) is simply dropped without retrieving up-to-date data. A standard Java Hashtable holds the VO/Key pairs, and a FIFO queue is used to prevent cache overflow.
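    The core just described might be sketched as follows. Class and member names (VoCache, maxEntries, and so on) are my own assumptions, and the real Key/VO classes are replaced by plain Objects.

```java
import java.util.Hashtable;
import java.util.LinkedList;

// Singleton VO cache: a Hashtable of Key -> VO pairs plus a FIFO queue
// that evicts the oldest entry when the cache would overflow.
public class VoCache {

    private static final VoCache INSTANCE = new VoCache(1000);

    private final Hashtable cache = new Hashtable();   // Key -> VO pairs
    private final LinkedList fifo = new LinkedList();  // insertion order
    private final int maxEntries;

    private VoCache(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    // Singleton accessor: one cache instance per JVM.
    public static VoCache getInstance() {
        return INSTANCE;
    }

    public synchronized void put(Object key, Object vo) {
        if (!cache.containsKey(key)) {
            fifo.addLast(key);
            if (fifo.size() > maxEntries) {            // FIFO overflow control
                cache.remove(fifo.removeFirst());
            }
        }
        cache.put(key, vo);
    }

    public synchronized Object get(Object key) {
        return cache.get(key);
    }

    // Lazy-Load invalidation: drop the entry, do NOT fetch fresh data.
    public synchronized void invalidate(Object key) {
        if (cache.remove(key) != null) {
            fifo.remove(key);
        }
    }
}
```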

    JMS-Based Update Tracking Facility
    Each VO cache instance has its own aggregated controller object, which is actually the implementation of a JMS topic listener. Similarly, every O/R mapping component has an aggregated JMS topic publisher used to notify cache controllers about EJB state changes. A single message topic residing inside the AS is used for communication. When the state of an O/R mapping component changes, all cache controllers receive an "invalidate entry" message. The Publish/Subscribe model was chosen to allow multiple listeners and publishers to coexist in the system. Note that JMS messages are always routed from an O/R mapping component to a VO cache, never in the other direction.
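    In the real library the controller implements javax.jms.MessageListener and the publisher wraps a JMS TopicPublisher subscribed to the single AS-side topic. The sketch below substitutes a plain Java interface for JMS so the invalidation flow can be shown (and run) without a JMS provider; all names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified model of the update-tracking flow: publishers on the AS side
// notify cache controllers on the client side, which drop stale entries.
public class UpdateTracking {

    public interface InvalidateListener {      // stands in for MessageListener
        void onInvalidate(Object key);
    }

    // Client-side cache controller: drops the stale entry on notification.
    public static class CacheController implements InvalidateListener {
        private final Map cache;

        public CacheController(Map cache) {
            this.cache = cache;
        }

        public void onInvalidate(Object key) {
            cache.remove(key);                 // Lazy-Load: drop, don't refetch
        }
    }

    // Server-side publisher: in the real system this wraps a JMS
    // TopicPublisher; every O/R mapping EJB calls publish() on state change.
    public static class StateChangePublisher {
        private final List listeners = new ArrayList();

        public void subscribe(InvalidateListener l) {
            listeners.add(l);
        }

        public void publish(Object key) {
            for (int i = 0; i < listeners.size(); i++) {
                ((InvalidateListener) listeners.get(i)).onInvalidate(key);
            }
        }
    }
}
```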

    Object Diagrams
    Figure 3 shows the objects that make up the reusable VO cache library, along with their subclasses, which form the O/R mapping layer of the sample application.

    Figure 3

    The Sample Application
    The design of the sample application conforms to the target application architecture defined earlier. Although the complexity of this application has been reduced for the purpose of this discussion, the design techniques apply equally to real-world applications. The O/R mapping layer of the sample application consists of several bean-managed EJB components packaged into a single module. Note that the attribute VOs (Publisher, Author) of a main entity are retrieved from the O/R mapping layer individually, whereas the main-entity VOs (Title) are obtained as a preordered list. Data assembly for end-user presentation is handled by JSP pages. The sample application is implemented using the standard set of J2EE APIs without any proprietary AS vendor extensions. It's intended to run in an environment comprising a dedicated JBoss application server node (possibly a cluster) connected to a number of standalone Tomcat instances running in separate processes.

    Performance Test
    The effect of introducing the proposed design techniques was evaluated by comparing the performance values (average response times) recorded while running the sample application in different configurations. Three major factors contribute to the overall system performance: first, the penalties created by moving presentation logic into out-of-process SCs; second, the performance gain from introducing the VO caching; and third, the performance gain from consolidating business logic into a more compact AS node (either a dedicated physical server or a cluster). Since the third factor is difficult to estimate correctly, requiring special test suites and a complex testing environment, I've limited my investigation to measurements of the first two. The total count of page retrievals in every test was 700, and the total count of records in the sample database was 1,000/500/100 (titles/authors/publishers). For both tests, JBoss was configured with commit option A (reusing pooled EJB instance states). Notice that along with reusing pooled EJB states, the introduction of the VO cache results in a dramatic performance improvement of more than 250%.

    Figure 4 illustrates the smoothed average of performance dynamics while retrieving catalogue pages with random numbers. This shows that EJB state caching on the client side can be much more efficient than delegating to AS facilities. Another thing to note is that the cache contents are constantly in sync with the underlying database, meaning that all changes made by other clients are reflected instantly. (JMS is fast enough to be used for synchronization purposes at the presentation tier, although it isn't really synchronous.) By downloading the source code from below, you can easily reproduce these tests.

    Figure 4


  • Roman, E., et al. (2001). Mastering Enterprise JavaBeans. John Wiley & Sons.
  • Marinescu, F., Roman, E. (2002). EJB Design Patterns. John Wiley & Sons.
  • The J2EE Tutorial: http://java.sun.com/j2ee/download.html#tutorial
  • Java Developers Connection, J2EE Patterns Catalog: http://developer.java.sun.com/developer/restricted/patterns/J2EEPatternsAtAGlance.html
  • Memoirs of eXtreme Dragon-Slayers, Part 18: www-106.ibm.com/developerworks/ibm/library/i-extreme18
  • JBoss (unofficial) manual: www.jboss.org/online-manual/HTML
  • JBoss-2.4.4 with embedded Tomcat-4.0.1: www.jboss.org/downloads.jsp

    Author Bio
    Mikhail Skorik, a senior Java developer/architect at HPS International, has more than six years of development experience on a variety of platforms, the last three years in developing Web-servicing solutions for e-commerce. Mikhail holds an MS in applied mathematics from the Moscow Institute of Physics and Technology (MIPT). [email protected]

    Source Code for This Article (~1.27 KB, zip format)

    All Rights Reserved
    Copyright ©  2004 SYS-CON Media, Inc.
      E-mail: [email protected]

    Java and Java-based marks are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. SYS-CON Publications, Inc. is independent of Sun Microsystems, Inc.