
There was no ticker tape parade to accompany the release of the JDBC 3.0 specification, but many will be pleasantly surprised at its list of enhancements that include everything from performance-tuning options to support for extended-level database features.

This article describes, in detail, the new features that are available in JDBC 3.0 and explains why they are important.

The JDBC 3.0 specification was shipped as part of the J2SE 1.4 release early in 2002. The key goals of the JDBC Expert Panel were to align with the most important features of SQL99, combine all the previous JDBC specifications into a single document, provide a standard way to take advantage of native DBMS functionality, and improve scalability. The JDBC 3.0 specification contains a collection of useful new features, none of which, if taken individually, would be considered "major."

Transactional Savepoints
One of the more useful new features of JDBC 3.0 is transactional savepoints. Traditionally, database transactions have been "all or nothing" types of events. An application would start a transaction, insert some rows, do some updates, and either make the changes permanent (commit) or discard them all (roll back) - all the changes would be made permanent or none of them would be.

With JDBC 3.0, the transactional model is more flexible. An application might start a transaction, insert several rows, and then create a savepoint, which serves as a bookmark. Then, the application might continue by performing some if/then/else type of logic, such as updating a group of rows. With JDBC 2.0, the application at this point would have been forced to either commit or roll back all the changes. In contrast, a JDBC 3.0 application might contain logic that determines that the updates were a bad idea, but the initial inserts were okay. In this case, the application can roll back to the savepoint (the bookmark) and commit the group of inserts as if the updates had never been attempted (see Listing 1).

Pooling Enhancements
Connection pooling existed in the JDBC 2.0 specification, but JDBC 3.0 provides a much finer granularity of control over the connection objects in the pool. The single most expensive operation in a database application is establishing a connection. With some databases, establishing a database connection can take up to nine network round-trips between the JDBC driver and the database.

Connection pooling essentially involves keeping a cache of database connection objects open and making them available for immediate use by any application that requests a connection. Instead of performing expensive network round-trips to the database server, a connection attempt results in the reassignment of a connection from the local cache to the application. When the application disconnects, the physical tie to the database server isn't severed; instead the connection is placed back into the cache for immediate reuse.

With JDBC 2.0 connection pooling, there were no tuning options. You either used connection pooling or you didn't. With JDBC 3.0, there's a finer level of granularity available to control the characteristics of the connection pool. For example, you can fine-tune the following options to allow for maximum performance and scalability (a configuration sketch follows the list):

  • The minimum number of connections to keep in the pool
  • The maximum number of connections to have in the pool
  • The initial pool size
  • How long connections can remain idle before they're discarded from the pool
  • How often the connection pool should be evaluated to see if it meets the configuration criteria
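For example, a vendor's ConnectionPoolDataSource implementation might expose these controls through JavaBeans-style setters on the standard pool property names (initialPoolSize, minPoolSize, maxPoolSize, maxIdleTime, propertyCycle). The following is only a sketch; VendorConnectionPoolDataSource is a placeholder class, and the exact setters and units vary by vendor:

// Hypothetical vendor class; substitute your driver's ConnectionPoolDataSource
VendorConnectionPoolDataSource cpds = new VendorConnectionPoolDataSource ();
cpds.setServerName ("dbserver");
cpds.setDatabaseName ("payroll");

cpds.setInitialPoolSize (5);    // connections created when the pool starts
cpds.setMinPoolSize (2);        // never shrink below two physical connections
cpds.setMaxPoolSize (20);       // never grow beyond twenty physical connections
cpds.setMaxIdleTime (300);      // discard connections idle for five minutes
cpds.setPropertyCycle (60);     // re-check the pool against these limits every 60 seconds
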
In addition to connection pooling tuning options, JDBC 3.0 also specifies semantics for providing a statement pool. Similar to connection pooling, a statement pool caches PreparedStatement objects so they can be reused from the cache without application intervention. For example, an application might create a PreparedStatement object similar to the following SQL statement:

select name, address, dept, salary from personnel
where empid = ? or name like ? or address = ?
When the PreparedStatement object is created, the SQL query is validated for syntax and a query optimization plan is produced. Creating a PreparedStatement is expensive, especially with some database systems such as DB2. Once the PreparedStatement is closed, a driver that implements JDBC 3.0 statement pooling places it in a local cache instead of discarding it. If the application subsequently creates a PreparedStatement with the same SQL query, the driver can retrieve the statement from the local cache instead of making a network round-trip to the server and repeating the expensive validation.
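Because the pool is keyed on the SQL text, an application gets the benefit simply by preparing and closing statements as usual. A minimal sketch, assuming an open java.sql.Connection named conn as in the other listings (the parameter values are illustrative):

String sql = "select name, address, dept, salary from personnel"
           + " where empid = ? or name like ? or address = ?";

// First use: the driver prepares the statement on the server
PreparedStatement ps = conn.prepareStatement (sql);
ps.setInt (1, 71164);
ps.setString (2, "Good%");
ps.setString (3, "Main Street");
ResultSet rs = ps.executeQuery ();
// ... process the results ...
rs.close ();
ps.close ();   // with statement pooling, the prepared statement returns to the cache

// Later use of the identical SQL text: a pooling driver can hand back
// the cached statement instead of re-preparing it on the server
PreparedStatement ps2 = conn.prepareStatement (sql);
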

One advantage of connection pooling tuning properties and statement pooling is that their benefits can be realized in existing applications without any code changes. Upgrading an environment to include a JDBC 3.0-compliant application server or driver can improve the performance and scalability of deployed systems, because the JDBC pooling enhancements are not directly invoked by the application's components. Instead, the pooling features are transparently used by the JDBC infrastructure.

Retrieval of Autogenerated Keys
Many databases have hidden columns (called pseudo-columns) that represent a unique key over every row in a table. For example, Oracle and Informix have ROWID pseudo-columns and Microsoft SQL Server provides identity columns. Using these types of columns in a query typically provides the fastest way to access a row, because the pseudo-columns usually represent the physical disk address of the data. Before JDBC 3.0, an application could only retrieve the value of the pseudo-columns by executing a Select statement immediately after inserting the data. For example:

int rowcount = stmt.executeUpdate (
    "insert into LocalGeniusList (name) values ('Karen')");
// insert row

// now get the disk address - rowid - for the newly inserted row
ResultSet rs = stmt.executeQuery (
    "select rowid from LocalGeniusList where name = 'Karen'");

This method of retrieving pseudo-columns has two major flaws. The first is that retrieving the pseudo-column requires a separate query to be sent over the network and executed on the server. The second is, because there might not be a primary key over the table, the search condition of the query may not be able to uniquely identify the row. In this case, multiple pseudo-column values could be returned and the application may not be able to figure out which one was actually the value for the most recently inserted row.

An optional feature of the JDBC 3.0 specification is the ability to retrieve autogenerated key information for a row when the row is inserted into a table. This new functionality removes the drawbacks in existing implementations that we discussed, because a separate query is not required to retrieve the key and the application is not responsible for the logic to retrieve the key. For example:

int rowcount = stmt.executeUpdate (
    "insert into LocalGeniusList (name) values ('Karen')",
    Statement.RETURN_GENERATED_KEYS);
// insert row AND return key

ResultSet rs = stmt.getGeneratedKeys ();
// key is automatically available

The application now has a value that can be used in a search condition to provide the fastest access to the row, as well as a value that uniquely identifies the row even when no primary key exists on the table.
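The generated keys come back as an ordinary ResultSet, so reading the value is a one-row fetch. Here is a minimal sketch that reads the key by position, since the name of the key column is driver-defined; it assumes the stmt object from the snippets above:

ResultSet keys = stmt.getGeneratedKeys ();
if (keys.next ()) {
    String generatedKey = keys.getString (1);   // e.g., a ROWID or identity value
    // use generatedKey in later where clauses for fast, unique access to the row
}
keys.close ();
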

Some databases, like DB2, do not provide an autogenerated key such as a pseudo-column. Fortunately, the JDBC 3.0 specification also allows the application to ask for the columns it knows represent a key on the table. For example:

// insert the row and specify that the employee ID be returned as the key
int rowcount = stmt.executeUpdate (
    "insert into LocalGeniusList (name) values ('Karen')",
    new String[] {"employeeID"});

ResultSet rs = stmt.getGeneratedKeys ();
// Karen's employeeID value is now available

The ability to retrieve autogenerated keys provides flexibility to the JDBC developer and a way to realize performance boosts when accessing data.

Updating BLOB and CLOB Data Types
The SQL99 specification introduced two new advanced built-in data types, BLOB and CLOB, that provide flexible and fast access to long binary and character data items, respectively. While JDBC 2.0 provided mechanisms to read BLOB and CLOB data, it lacked a standard update method for those types, resulting in several problems for JDBC application developers.

The first problem is that some JDBC driver vendors introduced proprietary update mechanisms for BLOB and CLOB data types. Using proprietary methods inside an API standard makes those applications nonstandard by definition. The second problem is that many JDBC application developers assumed they could use existing JDBC methods, such as setString, to update the values. Again, this resulted in behavior that differed from driver to driver, ultimately making those applications nonstandard.

JDBC 3.0 introduces a standard mechanism for updating BLOB and CLOB data. Note that the BLOB and CLOB interfaces apply only to database systems, such as Oracle, DB2, and Informix, that support the new SQL99 BLOB and CLOB data types. Microsoft SQL Server and Sybase support long varbinary and long varchar data types (image and text) that are similar to the new SQL99 types, but they don't support BLOB and CLOB types. Many JDBC application developers mistakenly think that the BLOB and CLOB interfaces should be used for all types of long data; however, they should be used only with BLOB and CLOB data. The JDBC specification provides methods such as setString to deal with long varchar and long varbinary data. Listing 2 shows how to update the contents of a CLOB column.

Multiple Open Result Set Objects
Data architects routinely build their business integrity rules into database system stored procedures. For example, instead of trying to remember all the tables that must be updated when a new employee is hired, developers commonly call a central stored procedure, perhaps one named AddNewEmployee. Calling a stored procedure can result in table updates as well as in queries that return result sets.

Invoking a single stored procedure can result in multiple result sets being returned to the calling application. For example, a stored procedure named employeeInfo might return the following result sets:

  • The employee's address and personal information
  • A list of the employee's staff if he or she is a supervisor
  • A list of the projects on which the employee is working
With JDBC 2.0, an application calling the employeeInfo stored procedure would have to serially process the three independent result sets that were generated at execution. If the application needed to first report on the projects list - the third of the sequenced result sets - then it would have to retrieve and discard the first two result sets. This processing methodology is somewhat inflexible and involves programming overhead and complexity. It also doesn't allow simultaneous access to any of the result sets generated from calling a procedure. That restriction limits the flexibility of a JDBC developer.

JDBC 3.0 gives developers the flexibility to decide if they want concurrent access to result sets generated from procedures or if they want the resources to be closed when a new result set is retrieved (JDBC 2.0 compliant behavior). Listing 3 is an example that simultaneously processes results from the employeeInfo procedure.

Miscellaneous Features
We've detailed several new JDBC 3.0 features that range from performance enhancements to development flexibility. There are other JDBC 3.0 enhancements, including the ability to retrieve parameter metadata (adds development flexibility), the ability to call stored procedure parameters by name (simplifies programming), the ability to specify holdable cursors (improves performance), and closer alignment with the SQL99 advanced data types.
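A couple of these are one-line changes at the API level. The following is a minimal sketch, again assuming an open Connection named conn; the parameter name "employeeID" is an assumption that must match the name declared by the employeeInfo procedure, and all three features are optional, so check your driver:

// Named stored procedure parameters: bind by name instead of by position
// ("employeeID" is an assumed parameter name)
CallableStatement cs = conn.prepareCall ("{call employeeInfo (?)}");
cs.setInt ("employeeID", 71164);
cs.execute ();

// Parameter metadata: discover how many parameters a statement takes
PreparedStatement ps = conn.prepareStatement (
    "select name from personnel where empid = ? or name like ?");
ParameterMetaData pmd = ps.getParameterMetaData ();
int paramCount = pmd.getParameterCount ();   // 2 for the statement above

// Holdable cursors: ask for a result set that stays open across a commit
Statement hstmt = conn.createStatement (
    ResultSet.TYPE_FORWARD_ONLY,
    ResultSet.CONCUR_READ_ONLY,
    ResultSet.HOLD_CURSORS_OVER_COMMIT);
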

There's something in JDBC 3.0 that will make every JDBC developer happier, and it's available now. Remember, JDBC has both required and optional features, and I've described some of each in this article. Before assuming that a feature is supported in your favorite JDBC 3.0-compliant driver, check the database metadata to make sure optional features are supported. JDBC 3.0 is an approved specification - it's time to utilize the new features. Good luck!

Author Bio
John Goodson is the vice president of research and development for DataDirect Technologies. For nearly 10 years, John has worked closely with Sun and Microsoft on the development of database access standards, and is an active member of the Expert Panel for the JDBC specification evolution. John earned his BS in computer science from Virginia Tech. [email protected]

	


Listing 1

conn.setAutoCommit (false);   // run the inserts and the delete in one transaction
Statement stmt = conn.createStatement ();

int rowcount = stmt.executeUpdate ("insert into etable (event) values ('TMM')");

rowcount = stmt.executeUpdate ("insert into costs (cost) values (45.0)");

Savepoint sv1 = conn.setSavepoint ("svpoint1");
// create savepoint for inserts

rowcount = stmt.executeUpdate ("delete from employees");

conn.rollback (sv1);
// discard the delete statement but keep the inserts

conn.commit ();
// inserts are now permanent





Listing 2

// Retrieve the case history for incident 71164
Statement stmt = conn.createStatement ();
ResultSet rs = stmt.executeQuery (
    "select CaseHistory from Cases where IncidentID = 71164");

rs.next ();   // position to the row
Clob data = rs.getClob (1);
// populate our Clob object with the data
rs.close ();

// now let's insert this history into another table
PreparedStatement pstmt = conn.prepareStatement (
    "insert into EscalatedIncidents (IncidentID, CaseHistory, Owner)"
    + " values (71164, ?, 'Goodson')");

pstmt.setClob (1, data);
// data is the Clob object we retrieved from the history table

int insertCount = pstmt.executeUpdate ();

// we're done ... CLOB data is now in the database




Listing 3

// get ready to call the employeeInfo procedure
CallableStatement cstmt = conn.prepareCall ("{call employeeInfo (?)}");

cstmt.setInt (1, 71164);
// bind parameter info for employee with id 71164

boolean retCode = cstmt.execute ();
// call the procedure
// For simplicity we'll bypass logic for the procedure possibly returning update counts

// first result set will be discarded ... materialize it and immediately move to the second
ResultSet discardRS = cstmt.getResultSet ();
// materialize first result set

cstmt.getMoreResults ();                  // by default, closes discardRS
ResultSet empListRS = cstmt.getResultSet ();
// the 2nd result set - the list of employees that report to 71164 - is now available

cstmt.getMoreResults (Statement.KEEP_CURRENT_RESULT);
ResultSet projectsRS = cstmt.getResultSet ();
// the 3rd result set is now materialized and we can simultaneously operate on both
//    the employee list and the project list

 
