As a distributed object technology, CORBA provides tremendous flexibility for implementing robust enterprise information systems composed of distributed components. In large-scale deployments, these components run in multiple servers spread across hosts running different operating systems.
The CORBA architecture facilitates integration of these distributed components and defines a mechanism for obtaining the required object references.
In particular, the CORBA programming model requires that remote object implementations be published so client processes can locate them. To provide this directory capability, the OMG defines two specifications, the naming service and the trader service, both of which specify interfaces for object reference registration, lookup, and management. Many directory service packages are commercially available and generically implemented. But a robust enterprise implementation must meet a number of requirements - scalability, reliability, availability, and more - all of which require an intimate understanding of the problem domain. Since generic implementations of the directory service can't address those requirements, a smart location service is needed to achieve system robustness.
Building a location service, however, consumes time and resources, and can also get in the way of other development. In this article, I outline the steps that allow a location service to be built, tested, and released into a development environment incrementally and unobtrusively.
To begin, I'll discuss the factors that influence the decision to build a location service, then define an implementation plan. To wrap up, I'll discuss a sample application in an enterprise application integration (EAI) effort.
When building an enterprise information system, the architecture team is tasked with requirements that describe architecture robustness. System robustness comes in many forms. In general, a good design seeks to balance five essential elements: scalability, reliability, availability, fault-tolerance, and manageability.
Scalability defines the system's ability to handle more workload by adding computing resources in a predictable manner. If the resource requirement grows exponentially as demand increases, the system is not scalable. Be careful to differentiate scalability from increasing speed: planning to add more resources to "speed up" transaction processing time is not a design for scalability. Speed enhancement has limits and succeeds only in conjunction with improvements in several areas; doubling the number of CPUs doesn't automatically cut processing time in half.
Reliability is a measurement of resilience to system failures. Mean time between failures (MTBF) is a standard measurement. In many cases, components that are well tested do not break, while the under-tested components do. System errors such as network and I/O also contribute to system failures. Thus a reliable system must handle those conditions gracefully, which closely relates to the topic of fault tolerance. Replication of system resources is a common prescribed solution for fault tolerance.
Availability is a measurement of online time. System components that run but are unable to handle new requests do not contribute to system availability. For a system to be available, its components must have the ability, or at least the appearance of it, to handle new requests.
To support the above, the system frequently expands beyond the scope of the local network. More resources are strategically added at various locations to expedite fulfillment of client requests. A successful expansion helps scale the system, but it creates more points of failure. To support the expansion, enterprise system management tools are often used to monitor and manage software applications and hardware devices.
For an enterprise information system to be robust, it must address these issues. Architecting a smart location service is a crucial step in that direction, because enabling scalability and availability means starting more servers to host replicas of remote objects. This raises the issue of load balancing so that requests are distributed evenly among servers. A round-robin scheme provides basic functionality, but a smart location service helps client processes find the best available remote services, so requests are serviced in a timely manner. To achieve this, the location service collects statistical information and acts upon it using predefined algorithms.
Since the location service acts as a hub to monitor and collect vitals from CORBA servers and objects, it can assist a system management tool by activating or deactivating server instances as needed. In addition, a scheme for server migration - deactivating a server on one host and starting it on another - helps load balance computing resources, and restarting hung servers increases overall system reliability and fault tolerance. Deactivating idle servers also frees computing resources for other processes to use. All these tasks contribute to meeting the design requirements of a robust enterprise information system.
CORBA Object References
The OMG IDL is a descriptive language used to define an interface that the client can call and the server's object implementations can provide. These object implementations are activated inside one or more server containers. The servers publish these as object references, which essentially contain the information on how to locate the servant and the signature of the interface or IDL type. Both are needed for the ORB to facilitate the communication link from the client's proxy object to the remote object implementation.
Before making remote invocations, the client process must obtain the object reference. Since this is a necessary step in the process, all ORB products support proprietary mechanisms to manage object references. Most, if not all, provide full implementations of the OMG naming specifications and allow persistence of object references onto backing stores such as LDAP or RDBMS. Few vendors provide trader implementations. The difference between a naming service and trader service is that the latter allows publication by type versus name. Type publishing allows more flexibility because properties can be associated. Nevertheless, the two services serve the same basic goal - to provide a facility for clients to look up an object reference, and for servers to register/deregister object references (see Figure 1).
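The distinction between name-based and type-based publication can be sketched in plain Java. This is an illustrative model only, not the OMG CosNaming or CosTrading API: a naming-style registry maps one hierarchical name to one reference, while a trader-style registry holds many offers per service type, each carrying properties that a query can filter on.

```java
import java.util.*;

// Minimal sketch contrasting name-based (naming service) and type-based
// (trader service) registration. The Registry class and its methods are
// illustrative stand-ins, not OMG-defined interfaces.
class Registry {
    // Naming-style: one stringified reference per hierarchical name
    private final Map<String, String> byName = new HashMap<>();
    // Trader-style: many offers per service type, each with properties
    private final Map<String, List<Map<String, String>>> byType = new HashMap<>();

    void bind(String name, String ior) { byName.put(name, ior); }
    String resolve(String name) { return byName.get(name); }

    void export(String type, String ior, Map<String, String> props) {
        Map<String, String> offer = new HashMap<>(props);
        offer.put("ior", ior);
        byType.computeIfAbsent(type, t -> new ArrayList<>()).add(offer);
    }

    // Trader-style query: all offers of a type whose property matches a value
    List<String> query(String type, String key, String value) {
        List<String> hits = new ArrayList<>();
        for (Map<String, String> offer : byType.getOrDefault(type, List.of()))
            if (value.equals(offer.get(key))) hits.add(offer.get("ior"));
        return hits;
    }
}
```

The trader-style `export`/`query` pair shows why type publishing is more flexible: properties travel with each offer, so a lookup can discriminate among multiple instances of the same service type.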
To handle more workload, a system is deployed with multiple instances of the same type of remote objects. This presents an issue for client processes that use the naming or trading facility. It requires cumbersome coding, in the naming service case, to navigate the naming context hierarchy, and it lacks the means for finding the best object reference. Given a choice of multiple object references in one lookup, the client process has to answer these questions:
- On which servers are these object references located? Which server is the least busy? Does it have the capacity to handle my requests in a timely manner?
- Is it better to connect to an object reference hosted by a less busy server two hops away or to a busier server in the local subnet?
Furthermore, stored object references do not indicate that the remote server (that hosts the remote object) is running. Another problem is that the remote server, because of an IDL-type version difference, might not be able to activate the needed servants. It's possible for a client process to receive a "stale" object reference, causing a remote exception when the client attempts to narrow it or invoke its operations. In the first case, more code is added to the client side to handle exceptions and initiate another lookup. In the versioning case, the client is unable to perform work and is likely forced to exit. Note that IDL-type versioning is necessary to support incremental system development.
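The client-side handling of a stale reference described above amounts to a lookup-retry loop. A minimal sketch, with the directory lookup and the CORBA narrow/invoke replaced by illustrative functional interfaces:

```java
// Sketch of client-side handling for a "stale" object reference: if an
// invocation fails, discard the reference and perform a fresh lookup.
// Lookup and Invoker are hypothetical stand-ins for a directory resolve
// and a CORBA narrow/invoke; they are not part of any OMG API.
class StaleRetry {
    interface Lookup { String freshReference(); }
    interface Invoker { boolean invoke(String ref); }  // false = stale/failed

    static String callWithRetry(Lookup lookup, Invoker invoker, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            String ref = lookup.freshReference();   // new lookup each attempt
            if (invoker.invoke(ref)) return ref;    // success: keep this reference
        }
        throw new IllegalStateException("no usable reference after " + maxAttempts + " attempts");
    }
}
```

Note that no amount of retrying helps in the versioning case; this loop only addresses the stale-reference problem.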
Location Service Architecture
As more code is added to the client side, more administration is needed to make sure clients are using the same lookup policy. It's better to devise a location service to centralize the policy and serve purposes such as verifying object references, collecting server vitals, and collaborating with system management tools. This location service does not have to be built from the ground up. Since the goal is to enhance the directory service capability, it should be designed atop existing directory implementations. Numerous free Java and C++ implementations of naming and trader services are already available from organizations such as www.trcinc.com, http://jacorb.inf.fu-berlin.de/, and www.mico.org.
By inserting a location service, the client process looks up object references via a different route (see Figure 2). Ideally the location service sports the same interface as the directory service (that is, either naming or trader) but doesn't have to. The location service uses an evaluator - a predefined algorithm - to select an object reference from the presorted list. When the client process receives an object reference from the location service, it's assured that the object reference is the best available in the system.
Figure 2 also shows the relative position of the location service compared to Figure 1. Enterprise business services continue to use the directory service of choice for object registration and deregistration. In this particular example, a trader service implementation is selected and runs in its own process space. Alternatively, it can be collocated in the same memory space as the location service so that the two appear to run in a single process. Either way, the trader service is accessible from both the client's remote locator agent and the location service. This pattern is useful for incremental development of any enhanced service.
The remote locator running in the client process defines an interface that the client uses to request binding to remote object references. In this configuration, the initial implementation of the remote locator interface enables lookup of the object reference via the trader service. As the location service implementation is completed, it's inserted seamlessly by updating the implementation of the remote locator.
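The seamless swap works because the client codes only against the locator interface. A sketch of the pattern, with all names (RemoteLocator, TraderLocator, SmartLocator) chosen for illustration rather than taken from any product:

```java
// The remote locator pattern: the client depends on one interface, and the
// implementation behind it is upgraded from a plain directory lookup to the
// smart location service without touching client code.
interface RemoteLocator {
    String bind(String serviceType);   // returns a stringified object reference
}

// Phase-1 implementation: delegates to the trader/naming service.
class TraderLocator implements RemoteLocator {
    public String bind(String serviceType) {
        // a real version would query the trader here
        return "IOR:trader-supplied";
    }
}

// Later drop-in replacement: asks the location service for the best reference.
class SmartLocator implements RemoteLocator {
    public String bind(String serviceType) {
        // a real version would consult the location service's evaluator
        return "IOR:best-available";
    }
}
```

Clients obtain a `RemoteLocator` from a factory or configuration, so switching from `TraderLocator` to `SmartLocator` is a deployment change, not a code change.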
Note that the trader service uses a backing store such as LDAP for storing object references. The location service has a working area, which is done entirely in memory. The working area contains lists of object references that are indexed and updated by the evaluator. If the location service is restarted, the evaluator prioritizes the lists using newly collected system information. This configuration speeds up the update and retrieval of object references in the location service during normal operations.
Location Service Implementation
Large-scale system development using distributed objects is difficult. Risks must be monitored, contained, and mitigated. Especially during the initial cycle of development, an attempt to do too much can negate the progress. A game plan must be developed to make a smooth transition from the initial phase deliveries to deployment.
Implementing a robust location service, therefore, can be a daunting task and should be done in several phases. Each problem domain has unique user and system requirements that define the breakdown. For many implementations, the following guidelines can be used:
- Phase 1: provides the infrastructure to support future development. During this time, the infrastructure team defines and packages a set of classes, and configures an environment to facilitate the registration and lookup of object references. For Java development, use the Java Naming and Directory Interface (JNDI) if possible, since it allows interoperability among different types of directory service implementations. Instructions should be given so that developers understand javax.naming package usage and the naming hierarchy the team defines. For a non-Java environment, more work is needed to supply an implementation that mimics the javax.naming subset of the JNDI service. During this phase, select a directory service type and find an available implementation to bootstrap the development.
- Phase 2: defines an implementation with the minimum features that must be supported by the location service. Statistical information, or system vitals such as the number of running threads, memory usage, and processing time, needs to be classified. The algorithm to manage an object reference is defined according to these vitals. For instance, before returning an object reference, the service should verify that it's ready to handle requests, or select the object reference with the least workload. Keep in mind that the location service complements the existing CORBA directory service.
- Phase 3: integrates the location service with external services for better system management. System management tools can provide feedback on general system health, such as network utilization, I/O, and memory usage, which are important vitals for the location service. In addition, the location service can use system management tools to launch or shut down server processes on selected hosts (see Figure 3).
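As a small illustration of the Phase 1 groundwork, the javax.naming package can parse the hierarchical names the team standardizes on. The `services/billing/InvoiceManager` hierarchy below is an assumed example, not a required layout:

```java
import javax.naming.CompositeName;
import javax.naming.NamingException;

// Parses a slash-separated composite name into its components using the
// standard javax.naming syntax. The example hierarchy is hypothetical.
public class NameDemo {
    public static String[] parse(String path) {
        try {
            CompositeName name = new CompositeName(path);  // splits on '/'
            String[] parts = new String[name.size()];
            for (int i = 0; i < name.size(); i++) parts[i] = name.get(i);
            return parts;
        } catch (NamingException e) {
            throw new IllegalArgumentException("bad name: " + path, e);
        }
    }
}
```

In a full Phase 1 delivery, the same names would be passed to an `InitialContext` backed by the chosen directory service provider.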
Once the groundwork is in place, implementing the location service occurs concurrently with other development tasks. This effort consists of designing an IDL type that supports the CosNaming interface, and the location service engine, which consists of collectors and evaluators. The collectors are responsible for gathering statistical information from running remote processes and possibly from the system management tool. The evaluators act on the vitals by retrieving the registered object references and applying sorting rules. These rules for managing object reference order are unique to the problem domain.
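The evaluator half of the engine can be sketched as a ranking function over collector-supplied vitals. The `Vitals` fields and the weighting formula below are assumptions chosen for illustration; a real deployment would define its own vitals and sorting rules:

```java
import java.util.*;
import java.util.stream.Collectors;

// Per-server vitals as reported by the collectors (illustrative fields).
class Vitals {
    final String ior; final int activeThreads; final double avgResponseMs;
    Vitals(String ior, int activeThreads, double avgResponseMs) {
        this.ior = ior; this.activeThreads = activeThreads; this.avgResponseMs = avgResponseMs;
    }
}

// The evaluator keeps candidate references sorted so the best is first.
class Evaluator {
    // Lower score = better candidate; the weights are an example policy.
    static double score(Vitals v) { return v.activeThreads * 10.0 + v.avgResponseMs; }

    static List<String> rank(List<Vitals> candidates) {
        return candidates.stream()
                .sorted(Comparator.comparingDouble(Evaluator::score))
                .map(v -> v.ior)
                .collect(Collectors.toList());
    }
}
```

The ranked list is the in-memory working area described earlier: the location service simply hands out the head of the list and re-ranks as new vitals arrive.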
An Example of EAI Use
An enterprise application integration (EAI) effort that uses CORBA can benefit from the implementation of a smart location service. Consider a scenario in which enterprise data is served by several legacy subsystems (see Figure 4).
- A subsystem that serves data in an ASCII-delimited format using Winsock: In this scenario, a wrapper is created and defined via IDL. The interface returns crucial information (e.g., IP address, port) for the client to connect to the legacy service. In addition, the interface could return metadata used to validate request and response messages.
- A subsystem that accesses XML information using HTTP: An adapter interface is created to facilitate data transfer over HTTP. The retrieved data is parsed, categorized, and persisted as needed. The interface provides information for the subsystem to do its work, and is driven by the client.
- A subsystem that communicates business data via a file-sweeping approach: Files that need to be processed by the client are located in the inbound directory and the results are stored in the outbound directory. The client does not know the location of the inbound/outbound folders. The wrapper shields the knowledge and allows implementation to access data via predetermined business logic.
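The Winsock wrapper from the first scenario might expose an interface along these lines. This is a plain-Java sketch of the shape such an IDL-defined wrapper could take; the names, host, port, and record layout are all hypothetical:

```java
// Connection details the wrapper hands back so the client can reach the
// legacy service (illustrative value object).
final class Endpoint {
    final String host; final int port;
    Endpoint(String host, int port) { this.host = host; this.port = port; }
}

// Wrapper interface for the ASCII-delimited Winsock subsystem: the client
// asks for connection info and metadata instead of hard-coding them.
interface LegacyFeedWrapper {
    Endpoint connectionInfo();   // where the legacy service listens
    String recordLayout();       // metadata to validate request/response messages
}

// Example implementation with assumed values.
class AsciiFeedWrapper implements LegacyFeedWrapper {
    public Endpoint connectionInfo() { return new Endpoint("feedhost", 4400); }
    public String recordLayout() { return "ACCT|DATE|AMOUNT"; }
}
```

Registering such wrappers in the location service is what lets clients stay ignorant of legacy addresses and formats.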
In each scenario, the subsystems are wrapped so they become accessible via IIOP. The wrappers' object references are published to and looked up through the location service, creating a loosely coupled yet highly cohesive mechanism.
Summary: Where Do You Want to Go?
A prerequisite for building large-scale enterprise information systems is an understanding of the requirements and the technology used. CORBA defines a programming model to connect disparate systems and components while providing boundless opportunities to customize for system robustness. This opportunity comes with a price, because its implementation consumes time and resources. In today's fast-moving business environment, system robustness is frequently an afterthought.
This is acceptable when an organization tries to get its footing by delivering a working system. However, effort should occur in parallel to identify an architecture that enables growth. A strategy is needed to make sure a migration path is available. Good software practices and strong understanding of distributed computing issues are essential. But there's no substitute for architecture analysis and planning, as it's crucial for building robust enterprise information systems.
Khanh Chau, an infrastructure architect with The Technical Resource Connection, Inc., is also an instructor and project lead for The TRC Java Developer boot camp program. He helped design, implement, and deploy a spine infrastructure framework (SIF) architecture for a major national real estate services company, and is currently developing an enterprise e-commerce portal for one of Germany's largest