
Java New Input/Output
Incorporate NIO functionality in your applications

Input/Output (I/O) is one of the fundamental aspects of computing that you have to deal with at some point during the application's development phase. Dealing with I/O presents its own challenges because I/O access is still slow. For example, reading or writing data from or to a disk involves a choreography of electronic, mechanical, and computer engineering disciplines, making the task slow in comparison to, say, reading or writing from random access memory over a high-speed bus.

To minimize the impact of working with slow I/O mediums and to maximize throughput and performance, software designers have devised a variety of strategies. These strategies include, but are not limited to, vectored I/O (scatter/gather) and multiplexing I/O. They can be found in use in I/O-intensive applications, typically written in C or C++. In the world of Java, however, such strategies were not available, and a developer who wanted to write a scalable, high-performance, I/O-intensive application would most likely resort to writing native code.

This situation changed drastically with the introduction of the New Input/Output (NIO) packages in the Merlin release (JDK 1.4). The NIO packages introduced an array of new functionality, including improved buffer management, scalable network and file I/O, character-set support, and regular expression matching. Together these enable a developer to write portable, high-performance, and scalable I/O-intensive applications.

This article provides an overview for developers who are planning to incorporate NIO functionality in their applications, as well as for developers looking for technologies that enable them to write high-performance I/O applications. It introduces buffers, channels, memory-mapped files, file locking, and multiplexing I/O.

High-Performance Input/Output
With the I/O architecture of JDK versions prior to 1.4, developers used streams to perform I/O operations. The basic sets of streams that make up the bulk of the I/O are byte streams and character streams. These streams provide APIs to carry out fundamental operations, such as reading data from a stream into an array and writing data from an array into a stream. Stream-based I/O blocks while the core I/O operation takes place. In other words, a read() method does not return until the requested data has been read from the stream and, similarly, a write() method does not return until all the data has been written to the stream. This blocking nature of I/O results in performance bottlenecks when writing high-performance I/O-intensive applications.

Apart from blocking, working on raw bytes as they become available is a cumbersome process. Buffering the data as a method of improving the performance was not incorporated in the core design of streams. This means that every single byte written to a stream is passed on to the operating system and then flushed to the physical or the network medium. This is a very inefficient way of handling I/O and results in performance issues.

A common strategy is to explicitly wrap a stream in a buffering class, following the decorator design pattern. This mechanism is built into the core I/O API by means of buffered streams such as BufferedReader: you pipe a nonbuffered stream into a buffered stream and then work with the latter. Buffered streams certainly alleviate the chore of working with raw bytes, but the buffering logic is hidden deep in the bowels of the buffered stream, giving the application developer little or no control over the buffers. Using these buffers is also not very efficient, because it involves a lot of data copying from user-space buffers to O/S buffers via the JVM buffer.
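
For illustration, here is a minimal sketch of the decorator approach with the pre-NIO stream classes; the file name is just a placeholder.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BufferedReadExample {
    public static void main(String[] args) throws IOException {
        // Pipe the unbuffered FileReader through a BufferedReader (decorator pattern).
        // "input.txt" is a placeholder file name.
        BufferedReader reader = new BufferedReader(new FileReader("input.txt"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            reader.close();
        }
    }
}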

The NIO framework introduced with JDK version 1.4 addressed all the shortcomings of stream (and buffered stream) based I/O and also provided a suite of new functionalities. To understand the fundamental concept of NIO, there are two aspects that need to be mastered: buffers and channels. In the following sections, we will look at buffers and channels in detail.

Buffers
A buffer is a container that can hold a finite and contiguous sequence of primitive data types. It's essentially an object wrapper around an array, with limits imposed on it. In this respect a buffer is the polar opposite of a container such as an ArrayList, which, in theory, is capable of holding an unlimited amount of data. Buffers were introduced not only to give the application developer more control, but also to speed up I/O-intensive applications.

The buffer is implemented as an abstract class in the java.nio package and has seven direct descendants: ByteBuffer, CharBuffer, DoubleBuffer, FloatBuffer, IntBuffer, LongBuffer, and ShortBuffer. A buffer is characterized by three important properties: capacity, limit, and position. These properties are set during buffer creation and altered during buffer manipulation. A buffer is created by calling a factory method - allocate(), or, in the case of ByteBuffer, allocateDirect(). Both methods take an int parameter that represents the newly created buffer's capacity. The difference between the two creation strategies is that allocateDirect() creates a native buffer that lives outside the JVM heap. This circumvents the extra copying that's normally required between the JVM buffer and the O/S buffer, resulting in a marked improvement in performance. The improvement comes at a price - there's a higher cost associated with the setup and teardown of direct buffers. With careful programming practices, however, such as creating direct buffers at application startup and reusing them, the creation/destruction cost can be amortized and the performance gain preserved.

// A nondirect (heap) buffer and a direct (native) buffer, each with a capacity of 512 bytes
ByteBuffer buffer = ByteBuffer.allocate(512);
ByteBuffer directBuffer = ByteBuffer.allocateDirect(512);

// Write the bytes of the string "Hello" into the nondirect buffer
String str = "Hello";
byte[] data = str.getBytes();
buffer.put(data);

The first two lines of this code snippet result in the creation of a nondirect ByteBuffer and a direct ByteBuffer. The newly created buffer has the properties indicated in Figure 1. Data is added to the buffer by calling any of the overloaded put() methods with the appropriate parameter(s). In the code example we add the string "Hello" to the byte buffer. After put(data) is called, the buffer contains the data and its properties are altered as indicated in Figure 1.

Figure 1

As we can see, the position pointer now points to the next empty cell after the data. So if we want to use this buffer to read the data back and make some business sense out of it, we need to flip the buffer so that the position pointer is moved back to the start of the data. This is accomplished with the following code snippet.

buffer.flip();
int limit = buffer.limit();
byte[] data = new byte[limit];
buffer.get(data);
System.out.println(new String(data));

The flip() method readies the buffer for draining by resetting the position and limit pointers; the capacity is left unchanged. After flip() is called, the position pointer points to the first cell, and the limit pointer points to the cell where the position pointer used to point before flip() was called. A clarification on the difference between limit and capacity is in order at this time. Capacity is the maximum number of items a buffer can hold, whereas limit is a value that ranges from zero to capacity, representing an arbitrary limitation on the buffer. The limit is set by calling either the limit() method or the flip() method.
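
To make the effect concrete, here is a minimal sketch (reusing the 512-byte buffer and the string "Hello" from the snippets above) that prints the three properties before and after flip():

import java.nio.ByteBuffer;

public class FlipDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(512);
        buffer.put("Hello".getBytes());

        // After put(): position = 5, limit = 512, capacity = 512
        System.out.println("before flip: position=" + buffer.position()
            + " limit=" + buffer.limit() + " capacity=" + buffer.capacity());

        buffer.flip();

        // After flip(): position = 0, limit = 5, capacity = 512
        System.out.println("after flip:  position=" + buffer.position()
            + " limit=" + buffer.limit() + " capacity=" + buffer.capacity());
    }
}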

It should be noted that the bytes in a buffer are ordered in a certain fashion. The native byte ordering depends on the underlying hardware: for example, the x86 architecture is little-endian, whereas the SPARC architecture is big-endian. A newly created ByteBuffer, however, always defaults to big-endian ordering; the ordering can be set explicitly with the order(ByteOrder byteOrder) method, and the no-argument order() method returns the buffer's current byte ordering. Figure 2 demonstrates the difference between little-endian and big-endian ordering of the magic number of the Java byte code (0xCAFEBABE); a small code sketch follows the figure.

Figure 2
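
As a minimal sketch of explicit byte ordering, the following snippet writes the magic number in the default big-endian order and then re-reads the same four bytes as a little-endian value:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOrderDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(4);

        // A newly created ByteBuffer defaults to big-endian ordering.
        System.out.println("default order: " + buffer.order());

        // Written big-endian, the magic number is laid out as CA FE BA BE.
        buffer.putInt(0xCAFEBABE);

        // Re-reading the same four bytes as little-endian yields 0xBEBAFECA.
        buffer.order(ByteOrder.LITTLE_ENDIAN);
        buffer.flip();
        System.out.println(Integer.toHexString(buffer.getInt())); // prints "bebafeca"
    }
}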

Channels
A channel represents an open connection to an entity such as a hardware device, a file, a network socket, or a program component that is capable of performing one or more I/O operations. For example, we could have a SocketChannel or a FileChannel that represents a connection to a socket or to a file, respectively. The main difference between streams and channels is that for the former we need to have separate streams to do reading and writing, but in the latter case we need only one channel to accomplish both reading and writing. The other difference is that streams operate on raw bytes, whereas channels operate on buffers.

The Channel interface and its extensions are defined in the java.nio.channels package.

Figure 3 provides a high-level view of the Channel interface and its subinterfaces.

Figure 3

The three interfaces - ByteChannel, ReadableByteChannel, and WritableByteChannel - are fairly self-explanatory about their functions. ScatteringByteChannel and GatheringByteChannel provide the means for reading or writing a sequence of bytes to or from multiple buffers in a single invocation. Scattering and gathering, also known as vectored I/O, have been around for a while and are widely used for developing high-performance I/O applications. A real-world example that makes extensive use of this concept at the device level is a SCSI controller. Listing 1 demonstrates the application of ScatteringByteChannel in a conjured-up application - a WAV player. A WAV file is made up of three major components: the RIFF chunk (12 bytes) that identifies the file as a WAV file; the format chunk (24 bytes) that identifies parameters such as sampling rate, channels, bytes/second, etc.; and the data chunk (the rest) that contains the actual sample data. Listing 1 shows how to read all this data in a single invocation.

Scattering into an array of ByteBuffers results in the data transfer happening in a single method invocation. This technique avoids the need for multiple system calls to perform the reads and combines them into one optimized read system call. The result is a performance boost by means of optimized data transfers to and from variable-size buffers. Similar logic holds true for GatheringByteChannel.
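
As an illustration of the gathering side, here is a hedged sketch that writes an invented message format - a small fixed-size header followed by a body - to a file in a single gathering write. The file name and the message layout are made up for the example.

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class GatherWriteDemo {
    public static void main(String[] args) throws IOException {
        byte[] body = "payload".getBytes();

        // Hypothetical 8-byte header: a message type followed by the body length.
        ByteBuffer headerBuffer = ByteBuffer.allocate(8);
        headerBuffer.putInt(1);            // message type
        headerBuffer.putInt(body.length);  // body length
        headerBuffer.flip();

        ByteBuffer bodyBuffer = ByteBuffer.wrap(body);

        FileChannel channel = new FileOutputStream("gather.out").getChannel();
        try {
            // Both buffers are drained to the channel in one gathering write.
            ByteBuffer[] buffers = { headerBuffer, bodyBuffer };
            channel.write(buffers);
        } finally {
            channel.close();
        }
    }
}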

SelectableChannel
A SelectableChannel is a channel that can be multiplexed by a Selector. The workings of the Selector are elaborated on later in the article. The descendants of SelectableChannel are DatagramChannel, Pipe.SinkChannel, Pipe.SourceChannel, ServerSocketChannel, and SocketChannel. In this section we'll quickly glance through the purpose of a few popular channels.

ServerSocketChannel and SocketChannel
ServerSocketChannel is a selectable channel for stream-oriented listening sockets (ServerSocket). It creates a socket that accepts inbound client connections; the socket itself cannot read or write data. Binding and socket manipulation must be done using the channel's peer - the ServerSocket, which can be obtained via the socket() method. SocketChannel is a selectable channel for stream-oriented sockets (Socket). It is an abstraction of the Socket with the added capability of nonblocking operation. Binding and socket manipulation must again be done using the channel's peer - the Socket, obtained via the socket() method.
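
A minimal sketch of both sides; the port number 8080 is an arbitrary choice for the example.

import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ChannelSetupDemo {
    public static void main(String[] args) throws Exception {
        // Server side: binding is done through the channel's peer ServerSocket,
        // obtained via socket().
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.socket().bind(new InetSocketAddress(8080));
        serverChannel.configureBlocking(false);

        // Client side: in nonblocking mode connect() usually returns false
        // immediately; the connection is completed later via finishConnect(),
        // typically driven by a Selector.
        SocketChannel clientChannel = SocketChannel.open();
        clientChannel.configureBlocking(false);
        clientChannel.connect(new InetSocketAddress("localhost", 8080));
    }
}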

FileChannel
FileChannel is probably the most frequently used channel, as it's used for reading, writing, mapping, and manipulating a file. With FileChannel it's now possible to do more than just read or write. Two of the major capabilities of FileChannel are discussed below.

Memory Mapped Files
Memory mapping a file involves mapping certain portions of the file, or the entire file, directly into memory. Any changes made to the mapped regions of the file are flushed to the underlying file. This is a piece of functionality that was missing in Java but has long been available to C and C++ programmers - memory-mapped files go back as far as the MULTICS operating system. In C and C++ the mmap() system call maps a file into memory, and the resulting pointer is then used to manipulate the file's contents. In Java, with NIO it's now possible to map an entire file or portions of a file into memory using the following code.

File f = new File("LargeMemoryMapFile.dat");
RandomAccessFile raf = new RandomAccessFile(f, "rw");
FileChannel fileChannel = raf.getChannel();

// Map the entire file into memory for both reading and writing.
MappedByteBuffer mappedByteBuffer =
    fileChannel.map(FileChannel.MapMode.READ_WRITE, 0, f.length());

The map() method of FileChannel returns a MappedByteBuffer, a particular kind of direct buffer. The main advantage of mapping a file is that the O/S treats the mapped file as part of its virtual memory. When a particular portion of the file is accessed or modified, only that part is brought into physical memory by means of a page fault. This is particularly advantageous when working with files that are large (on the order of gigabytes).

As an example, if you were to write a text editor for a 32-bit Windows system, you could potentially have to work with files larger than two gigabytes. Working with such large files could be a major drain on performance unless it's done using memory-mapped I/O. However, when dealing with small files on the order of a few kilobytes, memory mapping doesn't translate into any performance advantage.

It is possible to load the entire contents of the mapped file into physical memory using the load() method of MappedByteBuffer. For a large file, however, loading everything at once results in a large number of page faults at the O/S level, which is a serious drain on application performance. The solution is to map only certain portions of the file into memory, which is accomplished by passing an appropriate position and size to the map() method of FileChannel.
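
For instance, here is a hedged sketch that maps only a 64MB window of the (hypothetical) data file, starting one gigabyte into it; it assumes the file really is that large.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapRegionDemo {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("LargeMemoryMapFile.dat", "r");
        FileChannel fileChannel = raf.getChannel();

        // Map only a 64MB window starting 1GB into the file,
        // instead of mapping the whole file at once.
        long position = 1L << 30;                 // 1GB offset into the file
        long size = Math.min(64L * 1024 * 1024,   // 64MB window
                             fileChannel.size() - position);

        MappedByteBuffer window =
            fileChannel.map(FileChannel.MapMode.READ_ONLY, position, size);

        System.out.println("mapped " + window.capacity() + " bytes");
        fileChannel.close();
    }
}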

In addition, the mapping can be done in any of three possible modes: read-only, read/write, and private. In read-only mode only read operations can be performed on the buffer; any attempt to write to the buffer results in an unchecked ReadOnlyBufferException being thrown. In read/write mode it's possible to carry out both reads and writes on the buffer. In private (copy-on-write) mode, any changes made to the buffer are not propagated to the file and are not visible to other programs that might have mapped the same file.
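
A rough sketch of the read-only and private modes, assuming the data file from the earlier snippet exists, is non-empty, and is small enough to map in one piece:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapModeDemo {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("LargeMemoryMapFile.dat", "rw");
        FileChannel fileChannel = raf.getChannel();

        // Read-only mapping: reads succeed, but any put() throws the
        // unchecked java.nio.ReadOnlyBufferException.
        MappedByteBuffer readOnly =
            fileChannel.map(FileChannel.MapMode.READ_ONLY, 0, fileChannel.size());

        // Private (copy-on-write) mapping: changes are visible only to this
        // buffer and are never written back to the underlying file.
        MappedByteBuffer copyOnWrite =
            fileChannel.map(FileChannel.MapMode.PRIVATE, 0, fileChannel.size());
        copyOnWrite.put(0, (byte) 42);   // modifies the private copy, not the file

        fileChannel.close();
    }
}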

File Locking
With the pre-1.4 I/O API, the ability to lock a file is not available. To implement this functionality a developer had to write JNI code, thereby making the code nonportable (defeating Write Once, Run Anywhere). NIO introduced the ability to lock a file with clean, consistent, and 100% portable code. File locking is typically required when there is a need for data sharing and mutual exclusion among applications. File locks are built right into FileChannel, and it's now possible to lock files on any O/S that supports file locking. This is achieved with the code in Listing 2.

The lock() method acquires an exclusive lock on the file channel. It's also possible to obtain a lock on a section of a file, as opposed to locking the entire file, through the overloaded lock(long position, long size, boolean shared) method.

A file lock is held until the release() method is explicitly called or until the file channel is closed. A word of caution: certain O/Ss don't permit a locked file to be mapped into memory and vice versa. Therefore, programs that rely on memory mapping and file locking may not be portable.
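
As a hedged sketch of both variations - a region lock and a nonblocking attempt via tryLock() - using the same hypothetical file as Listing 2:

import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class RegionLockDemo {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("FileToBeLocked.dat", "rw");
        FileChannel fileChannel = raf.getChannel();

        // Lock only the first 100 bytes of the file; the 'false' flag asks
        // for an exclusive (non-shared) lock.
        FileLock regionLock = fileChannel.lock(0, 100, false);
        try {
            // ... work with the locked region ...
        } finally {
            regionLock.release();
        }

        // tryLock() returns immediately: it yields null instead of blocking
        // when another program already holds an overlapping lock.
        FileLock maybeLock = fileChannel.tryLock();
        if (maybeLock != null) {
            maybeLock.release();
        }

        fileChannel.close();
    }
}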

Selector
There are times when a developer has to wrestle with handling I/O from multiple data sources. With traditional, blocking stream-based I/O, the typical strategy is to spawn multiple threads to read/write data from/to multiple streams. This is done because a single thread reading from or writing to multiple streams runs the risk of blocking whenever one of the streams has no data available. However, spawning multiple threads is not a target-state solution because of the limitations it introduces. Each thread is typically allocated a contiguous stack of up to one megabyte. In a 32-bit process we're limited to a maximum of four gigabytes of address space (2^32 bytes), meaning we'll start getting java.lang.OutOfMemoryError before we reach around 4,000 threads. Although 4,000 threads may sound like a lot, for a serious multithreaded high-performance application it's just not enough (think of a multithreaded Web server).

When handling data from multiple data sources, a procedural language such as C doesn't suffer from Java's limitations. Handling multiple data sources in C is achieved by means of a select() system call. However, with the release of Merlin Java (version 1.4), the inability to deal with multiple data sources in Java was finally put to rest. This release injected new life into the world of scalable I/O by means of multiplexing I/O. Multiplexing I/O becomes possible when a read/write operation doesn't block.

Before we look at nonblocking I/O in detail, let's take a quick peek at traditional blocking I/O. In a stream-based application, when an I/O operation is performed, say a read() operation, the method is blocked until some data is made available.

Figure 4 demonstrates the steps involved in the data transfer between two systems using network sockets.

Figure 4

The data transfer between a client and the server involves multiple buffer copying and transfers through the network. So when the client enters the read() method, the entire data may not be available in the buffer. The client blocks until the entire requested data is made available to the client. In a nonblocking I/O, the read call returns immediately with whatever data is available. However, working with incomplete data does not serve a purpose and therefore the client has to resort to polling, which involves sitting in a tight loop waiting for all the data to become available.

Polling results in burning CPU cycles and is, therefore, considered inefficient. A better mode of operation would be an event-driven mechanism where an appropriate notification takes place when the data becomes available. This is what the Selector mechanism provides. This notification mechanism, coupled with nonblocking functionality, enables developers to write high-performance, scalable I/O-intensive applications.

A note on the Reactor design pattern is in order at this point. The Reactor pattern decouples event arrival from event handling. Events arrive at arbitrary times and are not dispatched immediately; the reactor keeps track of the arrived events and dispatches them only when the handlers ask for them. This architecture is indicated in Figure 5.

Figure 5

The Selector class found in the java.nio.channels package plays the role of the "Reactor" in the Reactor design pattern. The Selector multiplexes events on SelectableChannels. The channels that extend the SelectableChannel class can be placed into nonblocking mode using the configureBlocking() method. This means that the channels that extend AbstractSelectableChannel - including ServerSocketChannel, SocketChannel, and DatagramChannel - can be placed in nonblocking mode. FileChannel, however, cannot be placed in nonblocking mode.

A reference to the Selector is obtained through the static method open() on the Selector class. A reference to ServerSocketChannel or SocketChannel is obtained through the static method open() on those classes. The ServerSocketChannel and SocketChannel could be placed in nonblocking mode by simply invoking configureBlocking() with false as the method parameter. This is demonstrated in the following code snippet.

Selector selector = Selector.open();
SocketChannel channel = SocketChannel.open();
channel.configureBlocking(false);

// Register the channel with the selector, expressing interest
// in both connect and read events.
channel.register(selector, SelectionKey.OP_CONNECT | SelectionKey.OP_READ);

The next step is to register the channel with the selector (not the other way around), indicating the events of interest. This is done using the register() method of the SelectableChannel abstract class. The event of interest depends on the type of selectable channel. Table 1 presents valid operations for a few subclasses of the AbstractSelectableChannel.

Table 1

After registering the channel with the selector, the next step is to wait until the Selector indicates that an event of interest has occurred. This is achieved through the select() method, which blocks until an event of interest occurs. When one does, select() returns an integer indicating the number of updated keys. A key is an instance of the SelectionKey class that defines the relationship between a SelectableChannel and the Selector. As an aside, a selection key is created each time a channel is registered with a selector, and it remains valid until its cancel() method is invoked. The selector maintains three different sets of selection keys:

  • The key set that represents the set of channels that are registered with the selector
  • The selected key set that represents the set of channels that are detected to be ready for at least one of the operations identified in the key's interest set during a prior selection operation
  • The cancelled key set that represents the set of keys that have been cancelled but whose channels have not yet been deregistered
When select() returns, we retrieve the selected key set by calling the selectedKeys() method and iterate through it. For each key we get the channel that generated the event of interest by invoking the channel() method. In the case of the server, this method returns the ServerSocketChannel, which can be used to accept the incoming client connection. With a SocketChannel you need to check which event occurred. The previous code snippet expressed interest in connect and read; the following logic can be used to determine which event caused select() to return.

while (selector.select() > 0) {
    Set keys = selector.selectedKeys();
    Iterator readyIter = keys.iterator();
    while (readyIter.hasNext()) {
        SelectionKey key = (SelectionKey) readyIter.next();
        if (key.isConnectable()) {
            // Finish establishing the connection...
        } else if (key.isReadable()) {
            // Read the available data...
        }
    }
}

Keep in mind that while iterating over the selected key set, it's imperative that we remove each key as we process it. The selector adds keys to the selected key set but never removes them itself; if a key isn't removed, its channel will be reported as ready again on the next call to select() even though no new event has occurred. The removal is achieved with the following code:

while (readyIter.hasNext()) {
// Get key from set
SelectionKey key = (SelectionKey)readyIter.next();
// Remove current entry
readyIter.remove();
...
}

Thus, using the Selector class in conjunction with threads enables a developer to write high-performance, nonblocking, and scalable I/O applications. An example reactor implementation that illustrates the concepts described above is provided along with this article. The source code can be downloaded from www.sys-con.com/java/sourcec.cfm.
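
The downloadable example goes further; purely as an illustration, the following is a minimal single-threaded echo-server sketch built around the same select() loop. The port number is arbitrary, and the write-back path is simplified (a production server would handle partial writes and errors).

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.socket().bind(new InetSocketAddress(9000)); // arbitrary port
        serverChannel.configureBlocking(false);
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocateDirect(1024);

        while (selector.select() > 0) {
            Iterator readyIter = selector.selectedKeys().iterator();
            while (readyIter.hasNext()) {
                SelectionKey key = (SelectionKey) readyIter.next();
                readyIter.remove();

                if (key.isAcceptable()) {
                    // Accept the inbound connection and register it for reads.
                    SocketChannel client =
                        ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // Echo whatever data is currently available back to the client.
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int count = client.read(buffer);
                    if (count < 0) {
                        client.close();       // client closed the connection
                    } else {
                        buffer.flip();
                        client.write(buffer);
                    }
                }
            }
        }
    }
}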

Conclusion
The new I/O features introduced in the Java version 1.4 release provide exciting new ways to improve the performance and scalability of your I/O-intensive applications. This is especially good news for server-side developers, who can now use this powerful feature to develop robust applications without resorting to nonstandard coding practices.


About the Author
Vish Krishnan, PMP, is a project lead at Cap Gemini Ernst & Young in Overland Park, KS. He has about eight years of IT experience spanning a variety of technologies. [email protected]

    "Java New Input/Output"
    Vol. 9, Issue 2, p. 48

    	
    
    
    
Listing 1

File file = new File(fileName);
RandomAccessFile raf = new RandomAccessFile(file, "r");
FileChannel fileChannel = raf.getChannel();

// One buffer per WAV component: RIFF chunk, format chunk, and data chunk.
ByteBuffer riffChunk = ByteBuffer.allocate(12);
ByteBuffer formatChunk = ByteBuffer.allocate(24);
ByteBuffer dataChunk = ByteBuffer.allocate((int) file.length() - 12 - 24);

ByteBuffer[] wavChunks = { riffChunk, formatChunk, dataChunk };

// A single scattering read fills all three buffers in order.
fileChannel.read(wavChunks);
Listing 2

File f = new File("FileToBeLocked.dat");
RandomAccessFile raf = new RandomAccessFile(f, "rw");
FileChannel fileChannel = raf.getChannel();

// Acquire an exclusive lock on the entire file.
FileLock fileLock = fileChannel.lock();
if (fileLock.isValid())
    doSomethingHere();

// Release the lock explicitly; closing the channel also releases it.
fileLock.release();
    
    
