I wish I had a nickel for each time I've had to explain to a new vendor or professional acquaintance that, just because I have the word "Bank" in my company's name - NationsBank - I don't spend my time dealing with huge, monolithic, batch-processing systems. (Sure, we have these systems. And yes, they're still in use.) However, there are large groups within our bank whose purpose is to explore the leading edge of technology. Our work centers on incorporating this technology, both tactically and strategically, in production systems. As most of these systems are for internal use, few people see them unless they work with us.

Evolution of the Bank's Information Technology Departmental Structure
While the details of the systems we develop are confidential (they're part of our competitive business advantage), I'd like to share with you our experiences concerning architecture, tooling and how our development environment has evolved. To do this properly, I first have to explain who we are within the bank structure, how we started and how we've evolved. Next I'll examine how our architecture has evolved into its current form. Finally, I'll discuss the tooling we use to develop this architecture and how I envision it must evolve from here. Please remember that this is a work in progress. What I'm describing is the culmination of years of work, presented with a deliberately blind eye to any missteps along the way.

The Global Finance Software Engineering Group supports the Global Finance (GF) section of NationsBank. NationsBank is divided into two sections. General Bank is the section of the bank you and I use for our checking and savings accounts or to get and pay off a car loan. General Bank is the largest part of NationsBank.

Global Finance is the second section. As large corporations - say, those with $250,000,000 in sales - have different needs from their bankers, Global Finance offers a distinctly different set of services. These needs dictate different performance requirements - and usually much more exposure to the external customer. Consequently, Global Finance has chosen to use technology to gain an advantage in this arena.

My group, IS Tools, is part of the Global Finance Software Engineering Group. We evolved from the 1993 acquisition of Chicago Research and Trading (CRT) by NationsBank. Between 1993 and 1996 NationsBank explored this new acquisition. In addition to the knowledge of "derivatives" (specialized combinations of options, futures and indexes that can be purchased and traded) the bank had obtained, CRT had developed and refined a method of high-tech rapid development to support derivatives trading. NationsBank, realizing this was a potential competitive advantage, asked the CRT group to help implement their rapid application development (RAD) process within Global Finance as a whole.

To facilitate this, the head of IT from CRT asked to join NationsBank Services, Inc., to support Global Finance as the head of software engineering. As no group within the Services company dealt with development tools at that time, we, the IS Tools Group, also moved to support Global Finance. The manager of IS Tools then became the head of a team named Technical Architecture, and a new manager of IS Tools in Chicago was selected from within the Tools group.

We then created a tool group, with a local manager of IS Tools, for each of our other main development areas: Charlotte, North Carolina, and Dallas. This provides a more locally accessible point of contact for the developers in each region. The three managers report to the Technical Architecture Team Lead, who in turn reports to the head of software engineering. Each local manager supplies support, specialized expertise and, indirectly, training for each of the supported tools, as well as vendor contact and contract coordination.

Evolution of the IT Architecture
This arrangement shows our appreciation of the significance of architecture, and the infrastructure to deploy it, within the Global Finance section of the bank. It also underscores the importance of using technology to lower our costs and increase our efficiency for our customers, the developers who support the decision makers within Global Finance.

Now that you know our position within the bank, and how we evolved here, let's discuss the evolution of our architecture, and what tools and technologies we've used to accomplish this evolution.

Over the years we've watched our architecture change from the monolithic to traditional client/server. When traditional client/server ran out of steam, we moved to light client/server. Light client/server has progressed to the three-tier architecture (a heavily misused term). And we're finally moving to the fully distributed n-tier.

The monolithic architecture is often forgotten in our evolution discussions. It formed the basis for all our batch programming and survives (mostly as "legacy" code) even today, still filling many purposes for which it is well suited (most of us are paid by such a program). Because this technology proved brittle and hard to maintain, it launched us into the next phase - traditional client/server.

In traditional client/server each delivered function has its own client and its own server. If the organization is really astute, there is a set of standards that defines the user's interaction with the client, i.e., menu structure, what mouse clicks do, drag and drop, and so on. Hopefully this group also followed the standard the OS developers used. (This assumes that the OS the clients were developed for HAD a standard at the time.) The second-best outcome is interfaces that are REALLY different. If they're actually only close, users will confuse them, and you usually want to retrofit your clients to the OS vendor's standard, no matter the expense.

An Outgrowth from Traditional Client/Server
We began to outgrow the traditional client/server when the phenomenon I call "desktop implosion" occurred. When you begin to get more and more functionality on the desktop, the number and interaction of clients begins to balloon. As the ballooning proceeds, the density of programs eventually reaches critical mass and the desktop becomes a black hole, sucking the entire content of the office (and the hapless user) into it. (Actually, the number of programs and interactions destabilizes the desktop and becomes a nightmare to update, but the black hole metaphor is much more graphic.)

Client/server does, however, have some advantages. We can tune all of our server applications for maximum performance. We can set up special auditing, tracking and accounting information within each of the servers. Against that we have a number of disadvantages.

Disadvantages of Client/Server
The first disadvantage is the deployment problem. When the crews paint the Golden Gate Bridge, they start on one end and paint toward the other. When they reach the other end, the beginning is peeling so they start all over again. This continues ad infinitum. If you imagine that they change color each time they run out of paint, we have an analog to our deployment problem. At no time will they have a stable color for their bridge, just as we won't have a stable configuration on our workstations or servers when we reach the critical number of applications.

Second, we have little reuse of software except within libraries. These are generally developed for one application and then "upgraded" for inclusion in later ones. While this reuse is minimal, the practice is on the right path. It becomes less effective as the number of platforms increases.

A Need for an N-Tier Approach
Our first attempt to alleviate the traditional client/server problems is to convert to the light client/server. We move the client code to the server and combine it with the server code. We use a "general client" with a "glue" language to keep the user interface on the workstation. Our general client is a browser. The glue language is HTML. This setup is what some people call "three-tier" with the database forming the third tier. As we'll see, this usage has confused three-tier with the simpler light client/server.

We have now alleviated the deployment problem. We deploy to a server (or server farm), which is significantly simpler than deploying to a large number of desktops. This also improves scalability. We can still collect our audit and tracking data, but now we rely on an authentication method to really identify the user (either CGI-style authentication or some "single sign-on" technique). We do need an investment in infrastructure because we establish an intranet (using either Web technology or our own) to create the link. (We did some of this in traditional client/server, but it increases as utilization of light client/server grows.)
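
As a minimal sketch of this pattern - with the standard Java Servlet API standing in for the generation engine, and the servlet name, URL and lookupBalance() helper all hypothetical - a "general client plus HTML glue" function looks something like this. The getRemoteUser() call is the servlet-era equivalent of CGI's REMOTE_USER, the authentication hook that ties audit and tracking data to a real user.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical light client/server function: the old fat client's logic
// now lives beside the server code, and the browser only renders HTML.
public class BalanceServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Servlet equivalent of CGI's REMOTE_USER: the authentication
        // hook that lets audit and tracking data identify a real user.
        String user = request.getRemoteUser();

        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><head><title>Balance</title></head><body>");
        out.println("<h1>Balance for " + user + "</h1>");
        out.println("<p>" + lookupBalance(user) + "</p>"); // HTML is the glue
        out.println("</body></html>");
    }

    // Hypothetical stand-in for the real server-side business logic.
    private String lookupBalance(String user) {
        return "USD 0.00";
    }
}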

Multiple Platforms Come into the Picture
Our downside here is that we may have to design for multiple platforms and browsers. When we reached this phase we had multiple versions of multiple browsers on two different platforms. Since screen layout is difficult without the same resolution on each desktop, to say nothing of different browsers, our testing time increased dramatically. We alleviated this problem by specifying the kind and version of browser on each desk.

We still have minimal reuse in this model. In addition, as HTML is stateless and not really designed for this purpose, we've used JavaScript or VBScript for immediate user feedback. Not using these techniques results in the "send it and find out how it breaks" pages we've all come to hate. As this method requires an HTML generator (or generation logic), we have to learn those tools. This results in a flatter productivity-versus-time curve for developers entering this design model.

While this technique is better than the traditional, we still eventually run into a complexity issue. The new model addresses the deployment problem but does it at the expense of the productivity curve. To alleviate this problem we have to address the reuse problem and find a way to blend a number of our functions. The blend allows us to add functionality by adding only the changes necessary to create that new functionality.

Implementation of a Three-Tier Architecture
This blend becomes our three-tier architecture. The three tiers we refer to here are the presentation, server of services and data layers. The presentation layer is our interface with the user. It gathers the information the other layers need and presents the results they send back. The server of services provides standard services - security, transaction integrity and the like - and our own special services - such as our business rules - giving us a uniform way to handle our information. The data layer provides our interfaces to data objects (OODBMS), relational databases and nontraditional data sources, such as real-time feeds or e-mail.
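
As a sketch of that separation (all interface and method names here are hypothetical), each tier can be expressed as a Java interface so the presentation layer never touches a data source directly:

// Hypothetical interfaces sketching the three tiers.

// Data layer: hides whether the source is an OODBMS, a relational
// database or a nontraditional feed such as e-mail or real-time data.
interface PositionStore {
    double balanceFor(String accountId);
}

// Server of services: standard services (security, transaction
// integrity) plus our own business rules, built atop the data layer.
interface AccountService {
    boolean mayView(String userId, String accountId); // security rule
    double currentBalance(String accountId);          // business rule
}

// Presentation layer: gathers what the other layers need and renders
// what they send back; it never touches a data source directly.
interface BalanceView {
    void show(String userId, String accountId, AccountService service);
}

Because the presentation tier depends only on AccountService, the data layer can move from a relational database to an object database without the user interface changing.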

In this model we find the "Goldilocks" client, usually a browser augmented with one or more plug-ins, that's not too thin, not too thick, but just right. On the server side we incorporate a method of implementing the server of services so it can be spread across a number of servers. This gives us the scalability we got in light client/server without having to replicate the code across multiple servers. We now have a more encapsulated object model, so we can reuse at the object level.

Our downside is that we have to provide more infrastructure to implement the design. In addition, we've altered our deployment model. While we no longer deploy whole applications to multiple servers, we have to make sure the objects we deploy are compatible with existing objects and applications. This makes interface design and maintenance very important. Last, our design is still predominantly client/server.

A Move from Three Tier to N-Tier
Our latest move is to expand the three-tier model to the n-tier model. In this model we extend tiering back to the client. Both client and server use objects to implement functionality. In our browser example, CORBA-enabled applets can serve this function.
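
A minimal sketch of such an applet, assuming a hypothetical Quote interface whose typed stub and QuoteHelper class an IDL compiler would generate:

import java.applet.Applet;
import java.awt.Graphics;
import org.omg.CORBA.ORB;

// Hypothetical CORBA-enabled applet: the client tier holds a real
// object reference instead of posting HTML forms at the server.
public class QuoteApplet extends Applet {
    private Quote quote; // stub compiled from the Quote IDL interface

    public void init() {
        // The applet form of ORB.init() reads ORB settings from
        // applet parameters supplied in the enclosing HTML page.
        ORB orb = ORB.init(this, null);

        // The server's stringified object reference (IOR) arrives as
        // an applet parameter and is narrowed to the typed stub.
        org.omg.CORBA.Object obj =
                orb.string_to_object(getParameter("quoteIOR"));
        quote = QuoteHelper.narrow(obj);
    }

    public void paint(Graphics g) {
        // A remote CORBA call, written like a local one.
        g.drawString("Last price: " + quote.lastPrice("NB"), 10, 20);
    }
}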

On the server side the middle tier stratifies into interface, business and data strategies. Objects, both general and specific, implement these strategies. At this point component architecture comes into play because the component provides the model for reuse.

This is where our architecture stands today. We have applications that embody all of the steps along the way (with the exception of monolithic, I think). Given our discussion of who we are, you should have a good idea of why our architecture developed the way it did.

A New Set of Requirements
Performance: A whole set of requirements falls out of our architectural experiences. First, you'll need high levels of performance in both development and production. The performance is necessary because, in the rapidly moving environment we all work in, rapid application development is necessary to ensure time to market (even if the market is internal). If you don't believe this, see if you can remember when development cycles were 18 months.

Internet: Departmental projects have needs different from those of general Internet applications. Internet applications require a whole security infrastructure to ensure enterprise integrity. Internet applications have needs different from those of intranet applications. Our internal customers are usually more demanding.

Extranet: Extranet applications have a set of needs different from those of Internet or intranet applications. In the extranet arena you have to protect yourself from external intrusion while ensuring that your customers are the only people getting your content.

Applets vs. Applications: You need to examine BOTH applets and applications. Most people look only at applets and forget that the browser-launched application can provide better functionality than an amalgam of applets and HTML. You have to look at the design from all aspects.

Platform Issues: Platforms can be both a liability and an asset. Some platforms lose scalability before others. You need to design for flexibility so that redeployment doesn't require rewriting part of the software.

Software Configuration Management: SCM is essential for viability. You must be able to reproduce your work despite turnover, revision or disaster. This requires both discipline and infrastructure investment, but it's a vital link and shouldn't be scrimped on.

Frameworks: Frameworks are essential. Without frameworks you can't achieve the uniformity necessary to reuse objects and components in the environment. Frameworks provide functionality that is necessary but not core to your business. You can purchase a number of these frameworks and save a lot of time and effort. Make sure you get support with them.

Distribution: Distribution is essential. You need a means of distributing your objects and components. If you believe this is an easy problem, remember how many times your Windows 95 system has crashed because some installation program has decided to overlay a new DLL with an old one.

Testing: Testing is different in this environment. As your network is now part of your environment, your testing must include it. This means that you not only need to test on every platform your application could possibly run on, you must also load-test across the network. Test at all the loads you expect the application to see. Then continue the test into overload so you know when to upscale the implementation.
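
As a sketch of that overload testing (the target URL, thread counts and step size are all hypothetical), a small Java driver can step up the number of concurrent clients until response times stop scaling:

import java.io.InputStream;
import java.net.URL;

// Minimal load-test sketch: ramp concurrent clients against one URL,
// timing each pass and continuing past the expected load into overload.
public class LoadDriver {
    public static void main(String[] args) throws Exception {
        final URL target = new URL("http://appserver.example.com/balance");
        for (int clients = 10; clients <= 200; clients += 10) {
            Thread[] threads = new Thread[clients];
            long start = System.currentTimeMillis();
            for (int i = 0; i < clients; i++) {
                threads[i] = new Thread(new Runnable() {
                    public void run() {
                        try {
                            InputStream in = target.openStream();
                            byte[] buf = new byte[4096];
                            while (in.read(buf) != -1) { /* drain the reply */ }
                            in.close();
                        } catch (Exception e) {
                            System.err.println("request failed: " + e);
                        }
                    }
                });
                threads[i].start();
            }
            for (int i = 0; i < clients; i++) {
                threads[i].join();
            }
            long elapsed = System.currentTimeMillis() - start;
            // When elapsed time stops scaling roughly linearly, you've
            // found the point at which to upscale the implementation.
            System.out.println(clients + " clients: " + elapsed + " ms");
        }
    }
}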

Browsers: A browser must be part of the configuration. If you can, specify one type of browser and restrict the number of plug-ins to a reasonable number. This lowers the desktop update requirements.

Open Standards: We want to use open standards to achieve our goals. We define open standards as ones in which the specifications are published by the promulgator and at least two vendors supply product. While you usually cannot adhere 100% to open standards, you need to control when you deviate and know what you lose by that deviation.

Java as the Solution
We chose Java to provide our cross-platform capabilities. We have experience with C++ in multiple environments and know how expensive it is to maintain. If Java lowers this cost, it is a significant saving. In addition, we have the option of creating applets with Java.

CORBA supplies the middleware. CORBA is platform- and language-agnostic. This allows us to decide how and with what we implement our applications. In addition, the Interface Definition Language (IDL) provides an ideal, compilable medium for interface specification. The CORBA component, transaction and security models provide the essential services necessary to build our applications.
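
For example, a hypothetical Quote interface can be specified once in IDL and implemented in any mapped language; the Java servant below assumes the _QuoteImplBase skeleton an IDL-to-Java compiler would generate, and a C++ servant could stand behind the same interface without any client knowing the difference.

// IDL specification - compilable, language-neutral, and the single
// source of truth for the interface:
//
//   interface Quote {
//       double lastPrice(in string symbol);
//   };
//
// A Java servant for that interface. _QuoteImplBase is the skeleton
// the IDL compiler generates.
class QuoteImpl extends _QuoteImplBase {
    public double lastPrice(String symbol) {
        return 0.0; // stand-in for the real pricing logic
    }
}

public class QuoteServer {
    public static void main(String[] args) throws Exception {
        org.omg.CORBA.ORB orb = org.omg.CORBA.ORB.init(args, null);
        QuoteImpl servant = new QuoteImpl();
        orb.connect(servant); // register the servant with the ORB

        // Publish the stringified reference a client (such as a
        // CORBA-enabled applet) can pass to string_to_object().
        System.out.println(orb.object_to_string(servant));

        // Keep the server alive to handle incoming requests.
        java.lang.Object block = new java.lang.Object();
        synchronized (block) { block.wait(); }
    }
}

The IDL is the contract; which side is Java and which is C++ becomes a deployment decision rather than a design one.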

C++ provides the high-performance parts. Most of us have a large body of custom C++ code that we have no desire or, sometimes, ability to replace. Java is not a high-performance mathematical language - yet. Because of our choice of CORBA, we can be agnostic about which language our components are constructed in.

JBuilder as the Java IDE
For our Java IDE we chose JBuilder from Inprise. We were already a Delphi house and knew how to use the environment. And since we had a strong NeXTStep background, we wanted a strong component development environment - something we had already seen in Delphi. With JBuilder's implementation of JavaBeans as its component model, we felt confident in our choice. Our experience with switching from code to graphics and back, seamlessly and without encumbering comment tags, has enhanced the speed and reliability of code development. 100% Pure Java generation means that I have to generate the code once and simply test it on each of the possible deployment platforms. This is a significant saving of labor versus either C++ or designers that use custom heavyweight components.
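
Part of JavaBeans' appeal as a component model is how little machinery it requires. As a minimal sketch (the LimitField bean and its "limit" property are hypothetical), a bean is an ordinary class with a no-argument constructor, get/set naming conventions for introspection and, for bound properties, java.beans.PropertyChangeSupport:

import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.io.Serializable;

// Minimal JavaBean sketch: the naming conventions alone let a
// designer such as JBuilder's discover the "limit" property through
// introspection - no special comment tags required.
public class LimitField implements Serializable {
    private final PropertyChangeSupport changes =
            new PropertyChangeSupport(this);
    private double limit;

    public LimitField() { } // beans need a no-argument constructor

    public double getLimit() {
        return limit;
    }

    public void setLimit(double newLimit) {
        double oldLimit = limit;
        limit = newLimit;
        // "limit" is a bound property: interested components are
        // notified whenever it changes.
        changes.firePropertyChange("limit",
                new Double(oldLimit), new Double(newLimit));
    }

    public void addPropertyChangeListener(PropertyChangeListener l) {
        changes.addPropertyChangeListener(l);
    }

    public void removePropertyChangeListener(PropertyChangeListener l) {
        changes.removePropertyChangeListener(l);
    }
}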

VisiBroker for CORBA Implementation
VisiBroker provides the CORBA implementation. It has excellent Java and C++ implementations. It is well integrated with JBuilder and thus is easier for developers to use (they don't have to leave their environment to use the tool).

PVCS, NetDynamics and Mercury Interactive as Additional Tools
PVCS provides our SCM. It is integrated with JBuilder and also works with our server-side Java framework, NetDynamics. It was already a standard within the bank. This made deployment and support infrastructure consistent with our other development environments. It also supports both ASCII and non-ASCII code parts.

NetDynamics provides the server-side Web development framework. As HTML is the glue in our presentation layer, we need a robust server-side generation engine. We examine the framework market extensively on a regular basis. In our environment NetDynamics consistently outperforms the others as a production system. The incorporation of VisiBroker CORBA as its ORB makes for a wide CORBA pipe from our application presentation layer to the back end.

Rounding out our development environment are our Mercury Interactive test tools. These tools were chosen because the test scripts written at any point in the test procedure can be utilized at other steps. Thus the unit tests can be used as part of the integration test, and so on. In addition, their load-testing tool can be used to properly test NetDynamics.

The future directions we see as needed encompass test tools, remote debugging, a more general component model, application-specific and general servers, and more commercially available components. The test tools should give better insight into the application's function at the JVM level as well as provide code coverage and performance analysis. Remote debugging should be capable of debugging code anywhere on the network. Servers should provide support for the Enterprise JavaBeans and CORBA component models. Components should evolve beyond the basic "list, queue and dequeue" models.

I hope our experiences at NationsBank provide some insight into how and why our architecture has evolved and what tools we've chosen to implement this architecture.

About the Author
John Melka has over 25 years' experience in design and development engineering in multiple disciplines. He currently works for NationsBank Services, Inc. He can be reached at [email protected].

 
