Many three-tier applications built using various middleware products ultimately fail in production due to a lack of scalability, flexibility or reliability. This can trigger a need to migrate an application from one middleware product to another. In this article we'll discuss a process for porting servers between CORBA and EJB middleware implementations.

Object request brokers and application servers are popular middleware technologies you can readily find in the middle tier of distributed applications. These technologies impact the reliability and scalability of the applications they support since middleware introduces a higher degree of sophistication — and therefore greater complexity.

The arrival of distributed object technology into the mainstream of software development has led to the emergence of CORBA and RMI as standard object communication mechanisms. Moreover, thanks to the growing number of e-business applications being developed for the Web, application servers have recently gained prominence within software projects.

Cross-Server Interchangeability
As a way to address component interoperability and interchangeability across servers, the Enterprise JavaBeans specification defines an execution and services framework for server-side Java components. EJB relies on the underlying communication mechanism — typically CORBA or RMI — for exposing a component's public interface. Figure 1 illustrates how EJB components can interoperate with CORBA and Java objects.
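To make this concrete, here is a minimal sketch of the kind of public interface an EJB component exposes, written in EJB 1.x style. The Account name and its methods are illustrative rather than taken from any particular application; the container makes this remote interface available to clients over the underlying transport, whether plain RMI or RMI-IIOP/CORBA.

    // Hypothetical remote interface for an Account component (EJB 1.x style).
    // Clients see only this contract; the container supplies the RMI/CORBA plumbing.
    import java.rmi.RemoteException;
    import javax.ejb.EJBObject;

    public interface Account extends EJBObject {
        void deposit(double amount) throws RemoteException;
        double getBalance() throws RemoteException;
    }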

Figure 1: EJB components interoperating with CORBA and Java objects

Why Migrate?
The primary advantages of technologies like CORBA, RMI and EJB are interoperability and portability. As the use of these middleware standards continues to grow, requirements for functional richness and scalability help drive decisions about which implementation to choose. Changes in business and technical requirements, as well as vendor offerings, often result in the need to migrate between ORBs or application servers. More to the point, applications built using various middleware products all too often fail due to a lack of scalability, flexibility or reliability. When faced with this situation, the best course of action may be to migrate your application software to an alternative middleware product.

Migration usually involves porting your application from one vendor's middleware implementation to another, or even between implementations provided by the same vendor. Some vendors offer different application servers depending on whether you require robust EJB support for faster development versus CORBA underpinnings for enterprise scalability. In any event, migration raises a number of potential issues for the application components being ported and possibly also raises interoperability issues for components remaining on dissimilar implementations. (We'll deal with some of these issues shortly.)

Migration Process
Now let's explore a process for migrating applications between standard middleware implementations. The aims of this process are to capture baseline test cases and performance metrics, to port the application code, and finally to validate that the port was successful. To quantify the success of a migration effort it's essential to capture test cases and metrics methodically: for example, has the migration indeed resulted in increased scalability? Test automation helps streamline this process, providing a cost-effective way to complete a migration effort and quantify the results.

The migration of an application involves more than just porting source code — it entails careful planning, analysis and validation of results. While CORBA and EJB define interoperability and interchangeability standards, they don't prescribe how your middleware provider may have implemented the underlying infrastructure or how you should architect your application components. So you might very well find that migration involves rearchitecting portions of your application to compensate for differences between the current middleware implementation and the new one.

The migration process consists of three phases: preparation, porting and validation. These phases, illustrated in Figure 2, are described in the next sections.

Figure 2: The three phases of the migration process

Preparation Phase
In preparation for a port, you must first create behavioral test cases and capture performance metrics. These tests and metrics should be set in the context of specific goals and targets for the ported application. For example, a goal may be to introduce automatic load balancing. Or a target may be to improve an application's performance by an order of magnitude.

The steps for the preparation phase are:

  1. Define goals and targets.
  2. Create behavioral test cases.
  3. Capture baseline performance metrics.
  4. Analyze the application architecture and identify potential issues.

1. Define goals and targets.
First we need to know where we're going. Typically, the major reason for migrating is to improve the reliability of an application that's targeted for production use. Increasingly such applications are e-business engines that will experience significant demands in terms of client requests, resource availability, and so on. By defining concrete goals and targets it should be possible to align your application's usage requirements with the capabilities of the target middleware implementation. For instance, does the middleware provide automatic failover or will you have to build this into your application?

2. Create behavioral test cases.
Test cases must be created so that the behavior of the ported application can be validated. Since middle-tier server applications can consist of multiple processes that interact through published APIs, this isn't a traditional regression testing exercise. Rather, test cases must be defined for each of an application's components (see Figure 3).

Figure 3: Defining test cases for each application component

The challenge becomes how to accomplish this without having to write static test drivers for each component's API.

The following observations can be made about a middle-tier server:

  • Public interfaces are typically specified in IDL or Java.
  • Objects can be accessed dynamically via standard mechanisms such as CORBA's DII or Java's reflection API.
  • A client program can invoke methods and manipulate attributes of server objects irrespective of the implementation language and physical location of the server.
Test cases are most effectively created using an automated functional testing tool that can interact with object-based servers. (Segue's SilkPilot is such a tool for CORBA and EJB servers.) The general approach is to exploit dynamic invocation facilities, allowing you to connect to one or more servers, view information about live objects within the servers, invoke methods with parameters, and view or modify attributes.
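As a rough illustration of the dynamic invocation such tools build on, the fragment below uses Java's reflection API to call a method on a server object obtained at runtime. The Account component, its lookup and the deposit arguments are hypothetical.

    // Sketch of a dynamic call via Java reflection; a functional-testing tool
    // applies the same idea to methods it discovers at runtime.
    import java.lang.reflect.Method;

    public class DynamicInvoke {
        public static void main(String[] args) throws Exception {
            Object account = lookupAccount();   // e.g., via a naming service or JNDI
            Method deposit = account.getClass()
                    .getMethod("deposit", new Class[] { double.class });
            deposit.invoke(account, new Object[] { new Double(250.00) });
        }

        private static Object lookupAccount() {
            // Placeholder: resolve the server object through your middleware.
            throw new UnsupportedOperationException("wire up the lookup");
        }
    }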

Testing should be done within the context of the application's usage model. A banking application, for example, requires an account to be created before funds can be deposited. When an interactive test cycle is completed, a corresponding test case should be generated and saved. Test cases are run later during the validation phase of the migration effort to ensure that the newly ported application components are functioning properly.

3. Capture baseline performance metrics.
You need to quantify the performance of the original application for comparative analysis during the validation phase. Measurement of an application's performance involves simulating usage models under various loads. It's highly advisable to use an automated load-testing tool — such as Segue's SilkPerformer — to accurately simulate message traffic and measure the capacity and scalability of your server applications (see Figure 4).

Figure 4: Simulating message traffic to measure server capacity and scalability

The first step in capturing performance metrics is to record message traffic for a typical set of interactions with the server. You can intercept and record IIOP communication used by ORBs and highly scalable application servers, for instance. Then you can create a load test by scaling up the recorded traffic to represent the anticipated usage volume, such as a thousand banking clients making deposits rather than just the one representative case used to generate the initial traffic.

Data values captured within the recorded traffic should be replaced with randomized values to create a realistic simulation. Each of the thousand simulated banking clients would thus have a unique account number and deposit amount, for instance. Workloads can then be defined in terms of machines in the network generating the workload, number of concurrent clients executed, transaction frequencies and duration of the simulation. Scalability measurements become extremely useful when obtained under various workloads, such as starting a simulation with 20 clients and then adding 10 more every 30 seconds up to a thousand concurrent clients. Performance measurements include the throughput of an application component and the response time as perceived by client applications.
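Purely to illustrate such a workload definition, the sketch below ramps up simulated clients in plain Java threads: 20 to start, then 10 more every 30 seconds, each with randomized account data. A real load test would be driven by a dedicated tool, and the account format, amounts and per-client session are hypothetical.

    // Illustrative ramp-up toward 1,000 concurrent simulated clients.
    import java.util.Random;

    public class RampUpLoad {
        private static final Random random = new Random();

        public static void main(String[] args) throws InterruptedException {
            int clients = 0;
            while (clients < 1000) {
                int batch = (clients == 0) ? 20 : 10;   // 20 to start, then 10 per step
                for (int i = 0; i < batch; i++) {
                    final String account = "ACCT-" + random.nextInt(1000000);
                    final double amount = random.nextInt(99000) / 100.0;
                    new Thread(new Runnable() {
                        public void run() {
                            // Replace with a real client session against the server.
                            System.out.println(account + " deposits " + amount);
                        }
                    }).start();
                }
                clients += batch;
                Thread.sleep(30 * 1000);   // wait 30 seconds before the next batch
            }
        }
    }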

4. Analyze the application architecture and identify potential issues.
The architecture of the existing application must be analyzed and reconciled against the goals and targets for the application's performance when migrated to the new middleware. Decisions in the initial architecture were often influenced by limitations of the middleware originally used. So the migration may include some rearchitecting of the application to remove certain design concessions or workarounds that aren't necessary any longer.

Issues may also arise from the absence of particular features in the target middleware, or differences introduced by an alternate approach to implementing the underlying infrastructure; for example, multithreaded servers versus single-threaded/multiprocess. All potential issues should be identified as early in the migration process as possible.

Some specific architectural issues to consider include:

  • How clients connect to a server: For example, does your middleware implementation provide an API for binding directly to an object? Are you required to use a naming service, factory or other facility? (See the lookup sketch following this list.)
  • Mechanisms for creating and exposing objects within servers: Are you required to use either a basic object adapter (BOA) or portable object adapter (POA)?
  • Object management: What activation modes are available? How is object lifecycle managed? Does the middleware implement a dedicated connection between each client and your object or does it pool connections?
  • Load balancing: Does the middleware implement a transaction-processing framework? Does it provide some other functionality such as object groups?
  • Threading: Can servers be safely implemented using threads, or does the middleware require single-threaded/multiprocess servers?
  • Nonstandard features: Does your application take advantage of vendor-specific features such as interceptors, locators, loaders or client-side caching? Are you using special features to accomplish "nonstandard" tasks like piggybacking extra data onto a message?
  • Fault resilience: Are you depending on middleware-dependent features such as activation modes, automatic connection reestablishment or a transaction-processing framework?
  • Transaction support: Does your application assume the middleware will handle transaction starts, commits and rollbacks?
  • Process colocation: Are you using special features to colocate clients and servers within a common address space?
  • Callbacks: Do your clients expect callbacks from your servers? How does the middleware support this?
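To illustrate the first of these issues, the fragment below contrasts two common ways a client obtains an object reference: resolving it through the CORBA naming service versus an EJB-style JNDI lookup. The lookup paths are hypothetical, AccountHome pairs with the Account interface sketched earlier, and your middleware's bootstrap details will differ.

    // Two illustrative client connection styles; names and paths are hypothetical.
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    import javax.ejb.EJBHome;
    import javax.naming.InitialContext;
    import javax.rmi.PortableRemoteObject;
    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NamingContextExt;
    import org.omg.CosNaming.NamingContextExtHelper;

    public class ClientLookup {
        // CORBA style: resolve the object through the CosNaming service.
        static org.omg.CORBA.Object corbaLookup(String[] args) throws Exception {
            ORB orb = ORB.init(args, null);
            NamingContextExt naming = NamingContextExtHelper.narrow(
                    orb.resolve_initial_references("NameService"));
            return naming.resolve_str("Bank/Account");
        }

        // EJB style: look up the home interface through JNDI and narrow it.
        static AccountHome ejbLookup() throws Exception {
            Object home = new InitialContext().lookup("java:comp/env/ejb/Account");
            return (AccountHome) PortableRemoteObject.narrow(home, AccountHome.class);
        }
    }

    // Hypothetical home interface for the Account component sketched earlier.
    interface AccountHome extends EJBHome {
        Account create() throws CreateException, RemoteException;
    }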

Porting Phase
With the preparation phase complete, the application source code can be ported to the target middleware. Based on the conclusions drawn from your earlier analysis, it might be necessary to make changes to the source code. This is quite likely if your migration effort includes taking advantage of features that are available only in the new middleware.

The steps for the porting phase are the same as for any porting effort: modify source code as necessary, recompile on the target middleware and platform, test that the application functions properly and repeat as required. When porting is completed, it's time to move on to the validation phase.

Validation Phase
After the application is operational on the new middleware, you must validate the achievement of your goals and targets. The two steps for this phase are:

  1. Make sure that the application behaves properly.
  2. Measure the application's performance.

1. Make sure that the application behaves properly.
The test cases created during the preparation phase can be used to verify the behavior of the newly ported application. This is an API-level regression test. Each test case should be executed and the results reviewed to make sure that the new server components are responding properly.
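As a rough sketch, replaying one such test case amounts to re-invoking the recorded operations and comparing the results with the baseline captured during preparation. The Account lookup, operation sequence and expected value below are hypothetical.

    // Minimal replay of one saved test case against the ported server.
    public class ReplayTestCase {
        public static void main(String[] args) throws Exception {
            Account account = lookupAccount("ACCT-1001");   // resolve as shown earlier
            account.deposit(250.00);
            double expected = 250.00;                        // baseline result
            double actual = account.getBalance();
            if (Math.abs(actual - expected) > 0.001) {
                throw new AssertionError("Regression: expected " + expected + ", got " + actual);
            }
            System.out.println("Test case passed");
        }

        private static Account lookupAccount(String id) {
            // Placeholder: locate or create the account through your middleware.
            throw new UnsupportedOperationException("wire up the lookup");
        }
    }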

2. Measure the application's performance.
The load tests created during the preparation phase can be rerun to generate new performance metrics. These measurements can be compared to the initial baseline metrics and performance targets to provide quantifiable evidence that the application's performance has indeed improved.

Final Thoughts
Middleware standards like CORBA and EJB provide a marvelous basis upon which to design and build three-tier applications. However, choosing the right product for the lifetime of an application is made difficult, perhaps even unlikely, by the variety of middleware implementations, each with its own design approach, vendor-specific features and inevitable limitations. If improving the reliability of your CORBA or EJB application requires migrating to a new middleware implementation, the process outlined in this article should help ease the pain.

Author Bio
Todd Scallan is the director of project management for Segue Software's distributed computing products. He holds a BS in electrical engineering from Lehigh University and an MS in computer engineering from Syracuse University. [email protected]

 
