Increasingly, technologists are asked by strategists to state the capability of Java within a distributed component architecture. The larger corporate platform is mixed, and the owning, interacting businesses must implement a framework technical architecture in which present and future components can coexist and change with minimum impact. Larger installations hold data and applications at corporate and departmental levels across a heterogeneous computing environment. Technologists therefore have to articulate some of the values and norms of the business strategist as the business and technology surfaces merge.
This article outlines the typical components in a larger technical architecture framework and explores the contribution Java makes to realizing the key business drivers which underpin it.
Key business drivers
There are several key business drivers which a framework architecture should address.
Stable Technical Environment
It is essential that changes in the underlying technical environment do not adversely impact the stability of the core information systems such that achievement of the key business objectives is threatened. It is unrealistic to expect technology to remain stable for the foreseeable future. Hence, the architecture must support a method of separating business logic from the underlying technology such that both can be maintained independently of one another.
Responsiveness to Changing Business Requirements
Most businesses operate within a very volatile and fiercely competitive environment. It is very important that systems developed within the technical architecture are highly flexible and capable of rapid change. In order to meet this requirement, the architecture should facilitate the ability to easily apply changes to existing systems with the least possible need to develop new, ad hoc applications.
Cost Effective Maintenance
The level of ongoing operational support and maintenance costs for the new strategic systems must be as low as possible. In order to meet this requirement, the architecture should facilitate maximum reusability of code and enable the business to take advantage of object-oriented techniques and methodologies as they develop and mature.
Flexibility to Integrate New Technology
The IT industry is in a constant state of evolution, and innovative technical solutions offering real competitive advantage are constantly appearing. It is essential that the technical architecture be flexible enough to integrate these solutions at modest cost as they become available, without compromising existing systems. The way to achieve this objective is to ensure that the architecture adequately defines boundaries between individual technical components and embraces open standards wherever possible, such that components can be replaced relatively easily.
Supplier Independence
The technical architecture must not be dependent on any one supplier to the extent that the supplier could adversely impact the business. Examples would be raising component or service costs to an extreme level, moving away from open standards to more proprietary solutions, or refusing to embrace new technical directions as they emerge.
The Technical Architecture
The primary purpose of a technical architecture is to ensure that IS systems and services are delivered in a manner consistent with the business requirements. Figure 1 illustrates the technical architecture framework.
The framework comprises a set of discrete components as illustrated in the diagram. Each component has a clearly defined function and they interact via formal interfaces. This approach enables appropriate technical products to be selected for each component and, because the interfaces remain consistent, enables new technology to be easily integrated into the framework as it becomes available.
It is widely recognized within the industry that a three-tier approach should be adopted when building client/server applications, whereby there is formal partitioning between the data, application and presentation layers.
Most interactive Information Systems can be split into two broad categories: On Line Transaction Processing (OLTP) systems, which provide computerized solutions for business processes such as Payroll and Order Processing; and On Line Analytical Processing (OLAP) systems, which provide management information and decision support facilities. Each category requires a different set of design considerations and enabling technologies and must be catered for separately within the framework.
The framework also makes a distinction between small departmental or workgroup based applications and large enterprise-wide systems. This is important because there is often a requirement for enterprise data to be made available to departmental systems and appropriate enabling technology must be available to meet this requirement.
This section looks at each component in more detail.
Corporate Data
This component comprises the corporate databases. A key aspect of the architecture is that all corporate data is stored and managed in a consistent manner. This implies that all information relating to "Customer," for example, is held within the Customer database and that all applications that update Customer details or require Customer-related information would access this single database.
It is important that the business not become too heavily locked into any particular database product, so that it remains possible to migrate corporate data to an alternative RDBMS and/or server platform without changing application code. Most installations will therefore insist, for example, on the use of B-tree-only structures, no user-defined datatypes and no stored database procedures, while maintaining a high level of security, systems administration and reliability, easy access by all strategic applications, and access to significant bandwidth and wide area network connectivity.
Synchronous Data Access
This component relates to the manner in which OLTP applications synchronously access corporate data and represents the top, data access tier in the three-tier model. Its main purpose is to provide a formal interface between the business or application logic and the corporate data. This is achieved by placing all data access logic into separate program modules which can then be "called" interactively by the main application programs.
There are various standards for providing synchronous program-to-program communication across a mixed platform; for example, Java RMI and Remote Procedure Call (part of the Open Group's Distributed Computing Environment, or DCE). Both allow the developer to issue calls to remote modules as though the modules resided on the same platform; managing the communications protocols and identifying the correct server are handled by Java or DCE. These facilities, however, are non-transactional in nature: commitment control is not provided as part of the standard. RPC using sockets requires the developer to hardcode the location of the target system within the application, so any widespread deployment of sockets could result in unacceptable management and maintenance overheads.
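As a minimal sketch of the RMI style of call (the interface, class names and data below are hypothetical, not from any real system), a servant can be exported and invoked through its stub exactly as if it lived on another host; the stub hides the networking:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote data-access interface; names are illustrative only.
interface CustomerLookup extends Remote {
    String findName(int customerId) throws RemoteException;
}

class CustomerLookupImpl implements CustomerLookup {
    public String findName(int customerId) throws RemoteException {
        // In a real system this would query the corporate database.
        return customerId == 42 ? "ACME Ltd" : "UNKNOWN";
    }
}

public class RmiSketch {
    public static String call() throws Exception {
        CustomerLookupImpl impl = new CustomerLookupImpl();
        // Export the servant; the returned stub marshals calls over TCP,
        // exactly as it would to a remote host.
        CustomerLookup stub =
            (CustomerLookup) UnicastRemoteObject.exportObject(impl, 0);
        String name = stub.findName(42);
        UnicastRemoteObject.unexportObject(impl, true); // let the JVM exit
        return name;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(call());
    }
}
```

The caller never sees a socket or a protocol; only the `throws RemoteException` clause betrays that the call may cross the network.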
Asynchronous Data Access
This component relates to the requirement to asynchronously transfer data between applications or from one database to another. The most common requirement is to transfer data to and from central, corporate databases and multiple, distributed servers. This component is equally applicable to both OLTP and OLAP applications.
Synchronous data access requires both source and target systems to be available for communication in a highly interactive manner. In contrast, asynchronous data access implies a far lower level of interaction: the target system does not necessarily have to be available when the source request is initiated, so queuing facilities may be required. A typical example of asynchronous data access is batch file transfer, but it can also be a key enabler of event-driven processing between loosely coupled applications. Adoption of a standard architectural element in this area is a key requirement, and the most appropriate solution is Message-Oriented Middleware.
The broad requirements for asynchronous data access can be summarized as: the ability to push or pull data between any of the server platforms, irrespective of location within the wide area network; the ability to send messages between application programs residing on different systems; provision of time- or event-based scheduling on any platform; resilience in terms of automatic error recovery and restart; the ability to generate error and diagnostic alerts; provision of an API to enable simple application integration; data replication from one database to one or more copy databases; and the ability to write bespoke code to extract and insert data, using a file transfer mechanism such as FTP to move the extracted data across the network.
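The queuing behaviour at the heart of Message-Oriented Middleware can be sketched with the standard java.util.concurrent classes (a stand-in for a real MOM product; the message format is invented): the sender returns as soon as the message is queued, and the consumer drains the queue whenever it becomes available:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of MOM-style decoupling: the producer enqueues a message
// and continues; the consumer need not be active when the message is sent.
public class AsyncSketch {
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

    public static void send(String msg) throws InterruptedException {
        queue.put(msg);      // returns as soon as the message is queued
    }

    public static String receive() throws InterruptedException {
        return queue.take(); // consumer drains the queue later, in order
    }

    public static void main(String[] args) throws Exception {
        send("ORDER:1001");  // source system fires and forgets
        send("ORDER:1002");
        System.out.println(receive());
        System.out.println(receive());
    }
}
```

A real MOM product adds persistence, wide area delivery and error alerts on top of this basic fire-and-forget contract.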
OLTP Application Logic
This component comprises the physical code within enterprise applications that maps onto business rules and processes: all code except that related to either data access or screen presentation. The key point to note is that traditional application programs tightly couple presentation, business logic and data access logic within the source code, whereas the three-tier approach dictates formal interfaces between these tiers. This component relates only to the business logic tier. Separating the tiers also allows each to be placed where it runs most economically: if an application requires an extremely large database, it may be appropriate to place the data on the mainframe but, in order to minimize processing costs, implement the application logic on a HP-UX server.
Local OLTP Data
This component describes the manner in which local data is stored on distributed, departmental systems. In certain circumstances, there is also a requirement to distribute reference data from central corporate databases into the workgroup environment for local validation purposes.
Local Middleware
This component describes the manner in which local data is accessed from local applications, encompassing both synchronous and asynchronous access. As with corporate applications, there should be a formal interface between the local database, the data access logic and the application logic; the Local Middleware component provides the data access logic. It should reside on the local application server rather than on the client.
Local Application Logic
As with OLTP Application Logic, this component refers to all physical code within the application with the exception of code related to either data access or screen presentation.
This component comprises the application logic for local, departmental systems. As with enterprise systems, it relates only to the middle tier of the three-tier model.
User Interface Middleware
This component refers to the formal interface between the application logic and the presentation layer and is common to both local and enterprise-wide applications.
This component provides the middleware which manages the interaction between both corporate and local application logic as well as the presentation layer within the three-tier model.
Unlike previous layers where there is a possibility that adjacent components could reside on the same physical platform, this component will always involve transfer of information across a wide or local area network between the client and server.
Presentation
This component represents the code that controls how the application communicates with the user via the Graphical User Interface. This is the bottom tier within the three-tier model. Prior to the advent of the Web, this component simply referred to the application code which handled GUI design and screen I/O; within the context of Internet and intranet applications, it also embraces the use and deployment of Web servers.
Meta Data, OLAP Data Cleansing, Storage, Restructuring
Meta Data is the term used in OLAP to describe all aspects of the data warehouse environment. It covers areas such as: recording of schedules for data acquisition, cleansing and merging operations; recording of process logic for acquisition, cleansing, preparation, distribution, data quality reporting, dependencies and hierarchies; recording of users such as data owners, data editors and end users, together with security levels; and recording of infrastructure such as locations of data marts and configurations of user hardware and software. The cleansing component relates to the process of taking data from the post-acquisition staging area, validating the relationships between the acquired data and the warehouse as a whole, and converting the acquired data into a form suitable for warehousing.
The OLAP storage component defines how cleansed data that is available for OLAP and decision support functions should be stored. The Data Restructuring component relates to the process of preparing stored OLAP data for use via a specific analysis tool. Stored OLAP data is usually held in a de-normalized form. Analysis tools will typically require data to be heavily normalized, summarized and aggregated with a significant use of indexing.
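As an illustration of the restructuring step (the row shape, regions and figures are invented for the example), de-normalized warehouse rows can be summarized into the aggregated form an analysis tool expects:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// A de-normalized warehouse row, reduced to two fields for illustration.
class Sale {
    final String region;
    final int amount;
    Sale(String region, int amount) { this.region = region; this.amount = amount; }
}

public class RestructureSketch {
    // Summarize raw rows into per-region totals, the kind of aggregate
    // an analysis tool would query rather than the raw detail rows.
    public static Map<String, Integer> totalsByRegion(List<Sale> rows) {
        Map<String, Integer> totals = new TreeMap<>();
        for (Sale s : rows) {
            totals.merge(s.region, s.amount, Integer::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        List<Sale> rows = List.of(
            new Sale("EMEA", 100), new Sale("EMEA", 50), new Sale("APAC", 70));
        System.out.println(totalsByRegion(rows)); // {APAC=70, EMEA=150}
    }
}
```

In practice this restructuring would also build summary tables and indexes inside the database rather than in application code.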
OLAP Analysis Tools
This component defines the specific analysis tools that can be used to manipulate the restructured OLAP data.
Graphical User Interface
This component refers to the common user interface that is to be deployed on all application systems, whether traditional windows or Web browsers, and extends to non-graphical interfaces such as handheld devices and automation equipment.
Security
The most difficult aspect of providing security services is coming up with a solution that can integrate application security requirements with those relating to securing access to hardware and network resources. Selection of the most appropriate security infrastructure is heavily dependent upon the technical components chosen for the rest of the architecture.
Directory Services
Directory Services refers to the requirement to register all system resources such as clients, servers, file systems, networking details, printers, etc. within a single directory so that they can be accessed from anywhere within the distributed environment. This requirement is essential if the distributed environment is to be managed effectively. The lack of a single solution in this area would mean that the distributed environment is inherently unstable, particularly when any major component change is implemented.
Java Support for the Technical Architecture Framework
Java is a strategic platform in an environment where different applications are supported by different products on different platforms. The reasons can be summarized as follows:
Interfacing code needs to be written once; it will run on all participating hosts that run Java.
Java can encapsulate legacy systems and enable them within the component framework so they can participate in the new multi-tier strategy. These encapsulated legacy systems can then be incorporated into the maintenance infrastructure; e.g., CASE tools and IDEs using the JavaBean interface. The smallest JavaBean components can be used across all enterprise components, including legacy application components.
The designers can concentrate on the framework components rather than on regression factors in the implementation: VB is not scalable, C++ carries too much build and test overhead, and standard component libraries for C++ tend to be difficult to learn.
Java supports connectivity across all types of framework components without the complexities of C++. In some cases, components can be removed; some JDBC drivers, for example, do not need middleware products such as comms servers to connect to the database over a network.
Java allows thin client application delivery using a browser and is designed for network delivery of code and data to traditional and more exotic devices.
Major vendors are rewriting core products and services in Java or implementing a Java-enabled interface - in particular, Informix (Data Director for Java), Oracle (database, tools) and Sybase (middleware). Sun (and others) is formalizing a portable enterprise component standard called Enterprise JavaBeans™, which will allow an application to make use of whatever enterprise services happen to exist on the platform in use; it makes sense to use the intelligent work done by others.
It is likely that the vendors who supply different parts of the technical architecture will offer products written in Java or supporting Java connectivity. It makes sense to leverage this investment by major vendors and simplify the interface, instead of understanding how to write a C++ module and interface it with a Java enabled database.
Infrastructure services are frequently implemented on different platforms using different products and technologies, making it difficult to build portable application systems. Java provides a common interface to the underlying infrastructure services, regardless of the actual implementation.
For Synchronous Data Access, Java uses JDBC to interface with servers that perform data access. Vendors are shipping JDBC drivers which eliminate the need for network or comms servers between remote clients or application servers and the database server; this is possible because of Java's native support for remote procedure call, which hides the networking functionality. As a result there are fewer components to install and maintain. The clear advantage of a Java implementation is the relative transparency of the remote method call when using RMI: a Java application server can operate with the synchronous data access tier using RMI calls, and the data access tier invokes the JDBC API, with or without the relevant bridge. The important point is that the rule formally expressed in the architecture framework is realized, yet made transparent, by the Java platform, which retains a soft link to the implementation detail: the lower end can be changed without breaking the component interfaces.
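A minimal sketch of this separation (the interface, table, class names and connection URL are illustrative assumptions, not from the article): business logic depends only on a data-access interface, so a JDBC-backed implementation can be replaced without touching the callers. An in-memory stand-in is exercised in main so the sketch runs without a database server:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// The data-access tier is expressed as an interface; business logic never
// sees JDBC types, so the driver (or the whole RDBMS) can be swapped.
interface CustomerDao {
    String findName(int id);
}

// JDBC-backed implementation; the URL is a placeholder, not a real endpoint.
class JdbcCustomerDao implements CustomerDao {
    private final String url;
    JdbcCustomerDao(String url) { this.url = url; }
    public String findName(int id) {
        try (Connection c = DriverManager.getConnection(url);
             PreparedStatement ps = c.prepareStatement(
                 "SELECT name FROM customer WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}

// In-memory stand-in used here so the sketch runs without a database.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<Integer, String> data = new HashMap<>();
    InMemoryCustomerDao() { data.put(7, "Informix"); }
    public String findName(int id) { return data.get(id); }
}

public class DaoSketch {
    public static void main(String[] args) {
        CustomerDao dao = new InMemoryCustomerDao(); // swap for JdbcCustomerDao
        System.out.println(dao.findName(7));
    }
}
```

The swap from the in-memory stand-in to the JDBC implementation is a one-line change at the point of construction, which is precisely the soft link the framework rule demands.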
It is cheaper to encapsulate legacy applications and databases than to rewrite business logic that doesn't need rewriting. RMI (Remote Method Invocation) runs over the native Java Remote Method Protocol (JRMP) or, in the future, over the industry-standard Internet InterORB Protocol (IIOP).
Non-Java clients can invoke an application using CORBA IDL running over IIOP or a COM/CORBA Internet working service running over IIOP. Java can also interface with non-Java languages and applications in the form of standard application calls.
Regarding OLTP Application Logic, the most flexible application delivery vehicle is the browser, which downloads the relevant Java classes on demand; related code, including multimedia objects, can be parcelled into a single Jar archive sent with the Web page, which makes the approach scalable. The browser should render only the presentation layer and elementary client-side validation logic; the applet loads further code on demand from the network resource.
With Java, the code is always up to date; when code is updated, only the network repository needs to be updated; no code will persist on the client.
Java-enabled browsers act as front-end viewers. Mediator software accepts requests from the front end, applies whatever business rules are necessary and then relays the request to the enterprise data systems at the back end. Under this model, application distribution is automatic and requires no additional software beyond a Java-enabled Web browser.
The options for connecting the thin, browser client to the application server are TCP/IP sockets or remote procedure call using RMI.
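A minimal sketch of the sockets option (the request format and the uppercasing "business rule" are invented for illustration): a tiny mediator accepts a request line, applies a rule and replies, all over a plain TCP connection:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the socket option: a tiny "mediator" server applies a business
// rule to each request line; the thin client just sends text and reads the reply.
public class SocketSketch {
    public static String roundTrip(String request) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            Thread t = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    // The mediator's "business rule": uppercase and acknowledge.
                    out.println("OK:" + in.readLine().toUpperCase());
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            t.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
                out.println(request);
                return in.readLine();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("order 1001"));
    }
}
```

With RMI, by contrast, the wire protocol and the request/reply framing shown here disappear behind an ordinary method call.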
The ability to Web-enable legacy applications and encapsulate them as a JavaBean allows them to be portable to a number of different server platforms without reprogramming. Beans can communicate with other object models, such as CORBA, OpenDoc, ActiveX or OLE. Individual components can be inserted, updated or deleted with minimal disruption to the application architecture. This multi-tiered approach allows clients to communicate with older or proprietary server applications encapsulated within JavaBeans and allows legacy systems to participate in a more dynamic and integrated development platform. Instead of coding an interface to manage differences, the development IDE which implements Beans can use the encapsulated system.
Applications are thus inherently scalable and can run in a multithreaded, multiprocessing environment, and Beans may be reusable for new applications, reducing development time.
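A sketch of Bean encapsulation (the "legacy" pricing routine is hypothetical): the wrapper exposes the no-arg constructor and get/set properties that Bean-aware tools introspect, while delegating the actual work to the legacy code:

```java
import java.io.Serializable;

// A "legacy" routine, imagined here as a simple static calculation.
class LegacyPricing {
    static int priceInPence(int units) { return units * 250; }
}

// JavaBean wrapper: no-arg constructor, get/set properties, Serializable.
// Tools that understand Beans can now introspect and reuse the legacy logic.
public class PricingBean implements Serializable {
    private int units;

    public PricingBean() { }                 // required no-arg constructor

    public int getUnits() { return units; }
    public void setUnits(int units) { this.units = units; }

    public int getPriceInPence() {           // delegates to the legacy code
        return LegacyPricing.priceInPence(units);
    }

    public static void main(String[] args) {
        PricingBean bean = new PricingBean();
        bean.setUnits(4);
        System.out.println(bean.getPriceInPence());
    }
}
```

An IDE discovers the `units` property by the getter/setter naming convention alone; no extra interface code is needed to plug the legacy routine into a Bean-aware build environment.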
For local middleware, it is easier to manage RPC with RMI than in other languages simply because the Java runtime manages the interfacing. Application servers which have to manage an interface against different versions of Java will use sockets. Again, the complexity of socket connections is supported by a standard API in the language and is easier to use than C/C++. IDEs (Integrated Development Environments) for Java currently support an 'n-tier' model at design time and some are shipping with application servers, so the elapsed time for managing these interfaces is reduced because the IDE is managing the interface component and integration earlier in the build cycle. This helps both functional (look-and-feel, business function) prototyping and non-functional (back-end) prototyping.
With the PersonalJava API, applications can support a variety of client devices, including telephones, kiosks, smartcards or other Internet-enabled appliances. The exotic nature of these devices is ideal for Java, since it will allow them to participate in the broader framework, interfacing uniformly with local or remote architecture components (legacy and otherwise). In this sense, a technical architecture framework is an abstraction in which its core properties define the secure corporate boundary and express the ability to interact with other participating components in an extranet context.
Note that some JDBC implementations already remove the need for middleware in those database products which currently have it. For example, servers such as SQL*Net (Oracle) or Ingres/Net will not be required once the vendors release 'native' JDBC drivers which connect directly to the server.
Regarding Security and Directory Services, the Java language has a built-in security model; it does not support pointers and has bounds checking. Java also has byte code verification: downloaded classes are checksummed, optionally encrypted and signed; no illegal opcodes are allowed; and stack integrity (underflow or overflow) is managed by the Java VM. The VM also provides the sandbox: the security manager prohibits network-loaded classes from overloading Java base classes and restricts access to files and sockets. The user can also define authentication and encryption of Java applets. Java has a standard library called JNDI (Java Naming and Directory Interface) which can be used to register and locate components in the architecture.
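The bounds checking mentioned above can be demonstrated directly: an out-of-range access never corrupts memory but raises an exception the program can catch:

```java
// Sketch of the VM's runtime bounds checking: the illegal access below
// cannot scribble over adjacent memory as it could in C; the VM raises
// ArrayIndexOutOfBoundsException instead.
public class BoundsSketch {
    public static String probe() {
        int[] a = new int[2];
        try {
            return "value=" + a[5];               // index beyond the array
        } catch (ArrayIndexOutOfBoundsException e) {
            return "caught";                      // the VM enforced the bounds
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```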
The explanation above, together with the specifics of the Java platform (language design plus core classes), makes a strong case for using Java in component building: it is much easier to build and debug than C++ or VB, and it offers considerable leverage against existing applications, application delivery and integration.
The business metrics of portability, flexibility and separation of business function from contingent technology are clearly met by a Java implementation of a strategic architecture framework. Enterprise JavaBeans is the next exploration of this surface, which includes a transaction-processing element. In a landscape where two-tier client/server applications will be declared legacy applications by 2001, Java is the key enabling technology that will allow the strategist's price/performance ratio to be met.
About the Author
Graham Harrison is a Senior Consultant with Informix and a Sun Certified Java Programmer. He can be contacted at firstname.lastname@example.org