
I recently attended a technical conference and sat in on an interesting discussion about moving from traditional testing to the testing of object-oriented systems. As many of you know, testing and quality assurance is a pet peeve of mine. All too often, groups of otherwise intelligent, experienced software developers become spineless jellyfish when it comes to putting together, and sticking with, a realistic estimate of the testing needed to ensure that the system under development is completed and debugged. We've all paid the price for this neglect. How many service packs, patches and point updates have you had to install for your commercial software? And that's just the tip of the iceberg: the press rarely gets a glimpse inside the halls of industry to report on the situation within internal development groups. Gross underestimation of the testing effort is a significant contributor to our software problems.

One of the root causes of this difficulty is a lack of understanding of the testing process. Many software development organizations have abandoned the traditional waterfall methodology in favor of iterative or rapid application development (RAD) approaches. These approaches often have significant business advantages, such as shorter time to market, that justify the departure from the more rigorous waterfall method, but they have a significant impact on how, when and how much testing must occur. It's common knowledge that a defect discovered during design is orders of magnitude less expensive to fix than one caught after implementation, and the cost of fixing a bug discovered after the software has shipped or been deployed is greater still. The testing effort in an iterative environment increases because the pace and rapid change involved in RAD approaches often lead to the introduction of new defects and the reemergence of old ones. It's also difficult to ensure that sufficient testing occurs before each release, so a defect must often wait to be corrected in the next one.

Web and distributed computing add significant complication to this already busy picture. At the simplest level, a distributed, component-based approach requires the creation of test harnesses, because components need to be tested in isolation, as components, in addition to the testing they receive when the entire system is tested. This isn't as easy as exercising a screen, since there's typically no GUI for an individual component. In the Java world we create components for either CORBA or EJB, then build screens, or pages, that use these components. To unit-test a component, we need a program that exercises it directly, as in the sketch below. So, at a minimum, extra coding is required.
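A minimal harness for an EJB session bean might look something like the following sketch. The OrderService and OrderServiceHome interfaces, the priceOrder method and the JNDI name are all hypothetical, invented here for illustration; the point is simply that the component gets exercised programmatically, with no screen in sight.

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Hypothetical remote interfaces for an order-pricing session bean.
interface OrderService extends EJBObject {
    double priceOrder(String sku, int quantity) throws RemoteException;
}

interface OrderServiceHome extends EJBHome {
    OrderService create() throws CreateException, RemoteException;
}

// A stand-alone harness that exercises the component in isolation.
public class OrderServiceHarness {
    public static void main(String[] args) throws Exception {
        // Assumes JNDI is configured (jndi.properties or system properties)
        // and that the bean is deployed under this name.
        InitialContext ctx = new InitialContext();
        Object ref = ctx.lookup("ejb/OrderService");
        OrderServiceHome home = (OrderServiceHome)
            PortableRemoteObject.narrow(ref, OrderServiceHome.class);
        OrderService service = home.create();

        // A couple of quick checks in place of a GUI walkthrough.
        double total = service.priceOrder("ABC-123", 5);
        if (total <= 0.0) {
            throw new AssertionError("expected a positive total, got " + total);
        }
        System.out.println("priceOrder OK: " + total);

        service.remove();  // release the bean when we're done
    }
}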

In reality, I'm afraid the situation is somewhat worse. I've known people who believe that object-oriented development decreases the amount of testing required because only the changed objects need to be retested. I can see where they're coming from, but the argument doesn't hold: encapsulation of behavior doesn't lead to encapsulation of defects. If it did, a bug in one of the Windows system DLLs couldn't bring the entire system to a halt. A small example follows.
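To make that concrete, here's a contrived (and entirely hypothetical) pair of classes. Inventory tightens its contract between versions; its own unit tests are updated and pass, yet Billing, which nobody touched, is now broken, so retesting only the changed object would miss the defect.

// Billing was written and tested against version 1 of Inventory,
// where an unknown SKU simply returned a quantity of 0.
class Inventory {
    int quantityOnHand(String sku) {
        // v1 returned 0 for unknown SKUs; v2 tightened the contract:
        throw new IllegalArgumentException("unknown SKU: " + sku);
    }
}

class Billing {
    private final Inventory inventory = new Inventory();

    // Written to v1's contract: an unknown SKU means nothing to bill.
    boolean canBill(String sku) {
        return inventory.quantityOnHand(sku) > 0;
    }
}

public class RippleDemo {
    public static void main(String[] args) {
        try {
            new Billing().canBill("NO-SUCH-SKU");
            System.out.println("Billing still works");
        } catch (RuntimeException e) {
            // Inventory's unit tests pass, but the "unchanged" class fails.
            System.out.println("Untouched Billing class broke: " + e);
        }
    }
}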

The only solution I'm aware of for these problems is adequate testing. One expert I spoke with recently suggested that every iteration of a system developed using object-oriented techniques needs to be fully tested, not just unit-tested. Obviously this is a departure from many testing methodologies, in which integration and acceptance testing occur only at the end of the process, even under a RAD approach. In practice, that means maintaining a regression suite that runs at the close of every iteration, along the lines of the sketch that follows.
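As one sketch of what that could look like in Java, here's a JUnit 3.x-style suite; the two test classes are trivial stand-ins I've invented for illustration. The idea is that unit- and integration-level tests roll up into a single regression run that can be executed with one command at the end of each iteration.

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class FullRegressionSuite {

    // Unit-level check of a single component in isolation (placeholder logic).
    public static class PricingUnitTest extends TestCase {
        public void testPositiveTotal() {
            assertTrue(5 * 19.99 > 0.0);
        }
    }

    // Integration-level check that components cooperate (placeholder logic).
    public static class OrderFlowIntegrationTest extends TestCase {
        public void testOrderRoundTrip() {
            assertEquals("ABC-123", "ABC-" + 123);
        }
    }

    // One suite that bundles every level of test, run at each iteration's end.
    public static Test suite() {
        TestSuite suite = new TestSuite("Full regression -- run every iteration");
        suite.addTestSuite(PricingUnitTest.class);
        suite.addTestSuite(OrderFlowIntegrationTest.class);
        return suite;
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}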

I'm sure you hate discussing testing with project planners and management as much as I do. Many proponents of object-oriented programming misunderstand the cost savings involved when they claim it will increase development agility. They neglect to emphasize that these improvements are measured over time and are unlikely to be realized in a single project; it takes several projects, each leveraging previous work, to gain the advantage. Nowhere is this more apparent than in testing. So the next time you're on the spot for a testing estimate, try to provide a little education about the amount of testing that's really needed. And stick to your guns.

About the Author
Sean Rhody is the editor-in-chief of Java Developer's Journal. He is also a senior consultant with Computer Sciences Corporation, where he specializes in application architecture, particularly distributed systems. He can be reached by e-mail at [email protected]

 
