
There's a carefully understated saying, attributed to the ancient Chinese: "May you live in interesting times." While at first glance living in interesting times might be construed as a blessing, we who live in the present know that interesting can be a curse. We've taken complexity to a new order of magnitude. It's hard to know what, exactly, we should be interested in. In these interesting times of information overload, it's often difficult to separate the wheat from the chaff.

Those of us in the business of computers know something that isn't always obvious to the layman, although a layman can often sense it. Our computers have not worked as advertised to reduce complexity in our lives; in many ways they have actually increased it. If your TV were as complicated to operate as your PC, you would have traded it in for a radio years ago. If you think about it for more than a couple of minutes, it's fairly obvious that this can't go on. We can't afford to have every man, woman, and child on the planet become a computer technician. There's more to life than feeding the machines. If you feed a machine it will still be hungry tomorrow, but if you teach a machine how to feed itself, it can become autonomous. Autonomic computing has become a necessity.

GIGO
One of the reasons that computers haven't helped much in dealing with complexity is the simple fact that they do whatever they are programmed to do, and they are programmed by us. Anyone who's taken Computer Science 101 is familiar with the garbage in garbage out phenomenon. Although computers are much faster at certain kinds of serial processing tasks, like arithmetic, that can readily and clearly be expressed, the human brain is still far superior at handling complexity. While genes certainly play an important part in the programming of the human brain, it could be said that the human brain programs itself. In other words, it learns.

The modern French have another saying: "Vive la différence." This saying assumes, naturally, that you can tell the difference well enough to enjoy it. In our interesting times, this isn't always easy. Perhaps it never was, for though the ancient Chinese had the concepts of yang and yin to denote the male and female principles underlying reality, the reality of the Tao as embodied in the famous symbol merges the two as they transition from one into the other. So it may not have been all that easy in the old days either.

If we often have trouble telling the difference, how can we expect a computer to do so? Computers are notorious for their poor tolerance of ambiguity. For example, we humans don't have much trouble telling the difference between a male name and a female name, as long as the name is one we've learned while assimilating the other artifacts of our culture. Few humans from an English-speaking culture would deny that my wife's name, Eleanor, is female while my name, Michael, is male (I think there was once an actress named Michael, but I never could quite understand how - perhaps that was her intent).

A computer, on the other hand, would not "know" that Michael is a male name or Eleanor a female name unless the names were stored in a table of some kind. Indeed, that is precisely what we often do to "teach" a computer what it needs to "know." This is also precisely why computers are notoriously poor at tolerating ambiguity. If the foremost neurologists were to look inside my brain, or for that matter your brain, they would be hard pressed to locate the table of cross-references from name to gender. In fact, the very best of them could not locate it because it doesn't exist.
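
To make that concrete, here's a minimal Java sketch of the lookup-table approach (the class name is mine, and the table holds just the two names from this article). An exact-match table has no notion of "close enough":

import java.util.HashMap;
import java.util.Map;

// The lookup-table approach: the computer "knows" only what the table holds.
public class NameTable {
    public static void main(String[] args) {
        Map<String, String> gender = new HashMap<String, String>();
        gender.put("Michael", "male");
        gender.put("Eleanor", "female");

        System.out.println(gender.get("Michael"));  // male
        System.out.println(gender.get("Micheal"));  // null - one transposition and the table is useless
    }
}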

Electronic Brain
You don't have to be a brain surgeon to know that what does exist, in my brain and yours, is about two and a half pounds of tissue composed of neurons, axons, ganglia, and synapses. These countless cells combine in networks of networks of neurons of unimaginable (even when using the imaginations that they make possible) depth and breadth. With a couple of pounds of meat, nature has built a massively parallel computer of unparalleled proportion, unrivaled even by acres of silicon and metal. It is only a slight exaggeration to say, then, that our table of name-gender associations is stored everywhere, and at the same time nowhere, in our brains.

There is a downside to this kind of approach, of course. Learning takes time. Unlike a computer, a database in my brain cannot simply be loaded with a set of name-gender associations. My brain must be repeatedly exposed to these associations, often over many years and with many mistakes, to firmly establish which names are male and which are female. For a number of reasons, making mistakes is an important part of the learning process (Thomas Edison didn't think in terms of mistakes - he said he'd learned 5,000 ways not to make a light bulb).

There is also a very bright side to this approach, which completely offsets the downside. Unlike computers, our brains are remarkably tolerant of ambiguity. Once the pattern of associations is established, our brains are exquisitely sensitive to the proximity of a partial pattern to the whole. You will not hesitate, for example, to recognize this name:

Micha_l

Most computer programs (those not employing the algorithms we're about to discuss) would not get a hit in the table of associations (even though 86% of the name is recognizable), and hence would fail to recognize it as a male name. The neural networks in our brains are quite resilient to noise, and can discern patterns even in information that has been heavily distorted.
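
A neural network does nothing as crude as the following, but even this toy Java sketch (the names and scoring scheme are mine, purely for illustration) shows how scoring proximity to stored patterns, rather than demanding an exact hit, buys tolerance to noise:

// Toy illustration only (not a neural network): score a noisy name against
// known names by per-position character overlap and report the match.
public class FuzzyMatch {
    static double similarity(String a, String b) {
        int hits = 0;
        for (int i = 0; i < Math.min(a.length(), b.length()); i++) {
            if (a.charAt(i) == b.charAt(i)) hits++;
        }
        return 100.0 * hits / Math.max(a.length(), b.length());
    }

    public static void main(String[] args) {
        String[] known = { "Michael", "Eleanor" };
        for (String name : known) {
            System.out.printf("Micha_l vs %s: %.0f%%%n", name,
                    similarity("Micha_l", name));
        }
        // Micha_l vs Michael: 86% - easily the best match
        // Micha_l vs Eleanor: 0%
    }
}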

Likewise, a neural network computer program can be used to recognize objects (classification) and trends (prediction) with considerable accuracy, since it takes an approach to pattern learning and recognition similar to that found in nature (see "Parallel Distributed Processing" in the reference section).

Agent Building and Learning Environment
The Agent Building and Learning Environment, or ABLE, is a complete environment for designing, testing, and implementing Java-based artificial intelligence agents. The ABLE framework was developed by researchers at IBM's T.J. Watson Research Center. In terms of artificial intelligence features supported, ABLE covers the waterfront, and you would be hard pressed to think of an artificial intelligence paradigm that it does not employ. It is an AI tour de force.

Each paradigm is implemented as a JavaBean, and the collected beans are called AbleBeans. While this article focuses only on a single paradigm, that of neural network classification, there are many others, including Temporal Difference Learning, Naïve Bayes, and Radial Basis Function.

In addition to these artificial intelligence paradigms, ABLE provides agent beans for buyer/seller conversation logic, and for developing agents to automatically tune (autotune) computer systems and networks. There's also a complete ABLE Ruleset Language (ARL) for incorporating expert system paradigms, such as forward and backward chaining and predicate and fuzzy logic, into an application.

In the spirit of the best IDEs, ABLE enables you to incorporate intelligent agents into your applications without necessarily being an expert in AI. ABLE allows you to focus on what you want to do and not on how it is done.

The first step is to download ABLE from the IBM alphaWorks Web site (www.alphaworks.ibm.com/tech/able). The Web site not only contains the distributions for Windows and Linux, but also a lot of information about ABLE, including a moderated newsgroup.

There are currently five downloads available; the two related to Linux are in tar/gzip format. One tar file contains the executable and the other contains the help and Javadoc files. Even if space is an issue, I recommend installing both, since the help files contain a lot of valuable information related to installation and use, as well as a complete tutorial. One important consideration (but apparently often overlooked - and not just by me, if the newsgroup threads are an indication) is that you must have Java2 v1.3 installed on your system (and this is true whether you use Linux or Windows).

Simply untar the compressed file to a target directory (for example, able) and the ABLE files and documentation will be stored appropriately in a directory tree below it. This will also create a couple of preferences files. Next, change to the ../able1_4_0/bin directory, where you will find several shell scripts. You can execute either runnit.sh or runjas.sh (ABLE Java Agent Services) to start the ABLE Editor. These scripts make starting the ABLE Editor very simple, since all the classpath and JAR information is already provided. Before you start either script, however, you must either have already set a JAVA_HOME environment variable to the location of Java2 v1.3, or set it within the script by editing it. If all goes as it should, you will see the ABLE Editor displayed before you.

The ABLE Editor is a full-featured IDE (written in Java) that can be used to design, test, and debug intelligent agents. It's extremely easy to use, and it takes much of the work out of this process. Before going much further, however, it's a good idea to validate that everything is fully functional. A simple way to do that is to open and run the Animal neural network classification example provided with ABLE. This simple example exercises the features of ABLE end to end and is based on the old familiar Prolog animal classification problem. For those who don't remember, the problem is to classify various animals (panther, zebra, etc.) based on their characteristics (four legs, stripes, etc.).

From the dropdown menu, select File, then Open Agent, then navigate to the neural directory and select animal.ser. This will load the Animal NeuralClassifierAgent bean, which comprises Import, TestImport, InFilter, OutFilter, and BackPropagation components. There are also Inspectors associated with the InFilter, OutFilter, and BackPropagation components. I'll explain each of these in a moment, but for now you can simply validate that they have loaded and displayed properly. If they have, you can hit the Run button (circle of arrows) on the ABLE Editor top panel. This will start the training or testing of the neural network. If you cannot get this far, you'll need to do a bit of research to identify the problem.

Based on my experience, I can tell you that the likelihood of an ABLE problem is small at this point. More likely there's either a Java or a Linux problem. Problems related to Java might include version, location, and font properties. Linux problems could be similar, and include things like resolution settings and file locations. If you run into a situation where ABLE won't start or the components won't load or execute properly, it's likely something in your environment needs to be configured appropriately. A good resource is the newsgroup mentioned earlier, at the ABLE alphaWorks site. Once validation is complete and you're confident that ABLE is working as it should, you can proceed to design your own intelligent agents.

Sex, Machines, and JavaBeans
The SexMachineBean agent I'm about to describe is based on the neural network classification paradigm. (The source code for this article can be downloaded below.) As I said earlier, this agent can be used to discriminate between male and female names. By definition, a key element of neural network classification is the representation of the data. Since the neural network will be learning patterns in the data, it's extremely important to represent it appropriately. There are two parts to representing data to ABLE. The first is a file (which could be a database) containing the data itself; the second is the data definition file. In the case of SexMachineBean, the data file contains records that look like this:

A v r i e l x x x x x x female
A v r i l x x x x x x x female
A y d e e x x x x x x x female
A y d r i a x x x x x x female
A z a l e e x x x x x x female
A z e m i n a x x x x x female
A z i a x x x x x x x x female
A z i l e e x x x x x x female
A z u c e n a x x x x x female
A z z o l i n a x x x x female
A a r o n x x x x x x x male
A b a r a m x x x x x x male
A b b e x x x x x x x x male
A b b o t t x x x x x x male
A b d e l x x x x x x x male
A b d i x x x x x x x x male
A b d o o l x x x x x x male
A b d u l x x x x x x x male
A b d u l l a x x x x x male

The data files, by convention, have a suffix of .dat. The data definition file (which has a suffix of .dfn) for this data looks like this:

name1 categorical input
name2 categorical input
name3 categorical input
name4 categorical input
name5 categorical input
name6 categorical input
name7 categorical input
name8 categorical input
name9 categorical input
name10 categorical input
name11 categorical input
name12 categorical input
gender categorical output

From the data definition file, you can see that there are 13 fields defined for each record or name/gender pair. This is simply because the longest name was 12 characters, and the gender field brings the total to 13. Since not every name occupied 12 characters, it was necessary to pad the names that didn't (I simply used the character x, but an underscore would have done just as well).
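
If you'd rather generate these records than type them, a few lines of Java will do it. This sketch simply reproduces the record format shown above; the class and method names are mine:

// Build one record in the training-file format: the name split into 12
// single-character fields (padded with 'x'), followed by the gender field.
public class RecordBuilder {
    static String toRecord(String name, String gender) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 12; i++) {
            sb.append(i < name.length() ? name.charAt(i) : 'x').append(' ');
        }
        return sb.append(gender).toString();
    }

    public static void main(String[] args) {
        System.out.println(toRecord("Avriel", "female"));  // A v r i e l x x x x x x female
        System.out.println(toRecord("Aaron", "male"));     // A a r o n x x x x x x x male
    }
}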

It's also important to create both training and test files. This can be as simple as creating two sets of data and definition files. In the case of SexMachineBean, the files are named smb.dat and smb.dfn for training, and smbTest.dat and smbTest.dfn for testing. These should be stored in the examples/datafiles directory. I'll explain how these are used in a moment, but first, go to the ABLE Editor and from the dropdown menu select File, then New, then Default (com.ibm.able.agents.AbleDefaultAgent).

This, by the way, is the default upon opening the ABLE Editor; so if you've just opened the ABLE Editor, you don't need to perform these steps.

Next, go to the Agents tab and you'll see a number of icons. One of these looks like a vertical sorting bin, with two arrows pointing to separate slots in the bin. If you let your mouse hover over this icon, the descriptor AbleNeuralClassifierAgent will be displayed. Push this button, and a neural classifier agent will be added to the workspace. It will appear gray, since it has not yet been configured and therefore is disabled.

To configure the new agent, highlight the NeuralClassifierAgent in the left pane of the workspace, right-click it, and select Properties from the list that's displayed. You'll be presented with four tabs: General, Neural Classifier, Connections, and Functions. Select the Neural Classifier tab; the others can be ignored for the time being. For the Training File Name, click the Browse button, navigate to the examples/datafiles directory, and select smb.dfn. Do the same for the Test File Name, but select smbTest.dfn. Then, under Hidden1, enter a value of 10. This will create 10 hidden units on the first intermediate network layer. The number of hidden units can vary depending on the application, and may require some experimentation for optimal training; 10 is adequate for training the network to recognize all female and male names beginning with the letter A. Finally, click the Generate Beans button, then click OK.

You'll notice that the ABLE Editor has generated an Import bean, a TestImport bean, an InFilter bean, a BackPropagation bean, and an OutFilter bean. You'll also notice in the GUI panel on the right that the beans are appropriately connected. You can open the properties for each of these beans and review the information displayed to get a better understanding of the ABLE environment. Notice that both the Import and TestImport beans are connected to the InFilter bean, but that only one connection is enabled. This is because during the training of the neural network, the ABLE Editor will alternate between training and testing in order to validate what has been learned.

To view the progress of the training process, the ABLE Editor provides a facility for creating Inspectors. Creating an Inspector for a bean is as simple as everything else in the ABLE Editor. To create Inspectors for the InFilter and OutFilter beans, highlight each one in turn, right-click, then select Inspect. Two new windows will open, and the values for each filter will be displayed in the corresponding window. The Inspector for the BackPropagation bean is created the same way, except that in this case certain parameters should be selected. I found the following parameters the most informative on training progress: percentCorrect, percentIncorrect, percentUnknown, netEpoch, and netArchitecture. For any Inspector, the parameters to be displayed are selected by choosing Data, then Parameters from the dropdown menu.

Under optimal circumstances, percentCorrect should rise to 100 while percentIncorrect and percentUnknown fall to zero. If that doesn't happen, it's a clear indication of a problem with either the data representation or the neural network architecture you've defined. The amount of data is also a factor in training: if you have a relatively small amount of data and the network oscillates or fails to converge, you may want to increase the number of hidden units. Setting the network architecture is somewhat of an art, but there are many references available on the subject. The netEpoch parameter displays the number of passes over the training set, while the netArchitecture parameter displays the defined number of input, hidden, and output units.

Learning About the Birds and the Bees
That's basically all there is to the configuration of SexMachineBean. To start training, highlight the NeuralClassifierAgent in the left pane of the workspace once more, right-click it, and select Properties from the list that's displayed. Select the Neural Classifier tab and press the Start button to begin training the neural network. Training will continue until one of three things happens: the minimum percent correct reaches the threshold you've set; the maximum number of passes reaches the threshold you've set; or you hit the Stop button. It's difficult to predict the training time required, since it will vary with the data, the neural network architecture, and the speed of the machine. The relatively modest training set provided here should not take very long, but be patient.

Once you have a trained neural network, you can save its state in serialized form. This is true for all ABLE agents, not just neural networks. To me, this is one of the most significant and valuable contributions of ABLE: it allows you to simply store whatever has been learned and reuse it in your application. After all, the ease of use ABLE provides in designing and building a neural network would be somewhat academic if you couldn't take it any further. Since ABLE stores a trained neural network in Java serialized form, these objects can be re-created anywhere a JVM will run. Obviously, this gives the intelligent agents created with ABLE a high degree of mobility and versatility. To store an agent in serialized form, simply click File on the dropdown menu; you'll notice selections for Save and for Export. When the agent is finished and ready to be used in your application, use Export. At this point, just choose a name (like smbA.ser) and proceed to store the bean.
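
Because the exported agent is an ordinary Java serialized object, restoring it in your own code needs nothing more than the standard java.io machinery. Here's a minimal sketch, assuming the smbA.ser file from above; the specific ABLE class to cast to depends on the agent you exported, so I leave it as Object:

import java.io.FileInputStream;
import java.io.ObjectInputStream;

// Restore an exported agent from its .ser file via standard Java serialization.
public class RestoreAgent {
    public static void main(String[] args) throws Exception {
        ObjectInputStream in =
                new ObjectInputStream(new FileInputStream("smbA.ser"));
        Object agent = in.readObject();  // cast to the appropriate AbleBean class
        in.close();
        System.out.println("Restored agent: " + agent.getClass().getName());
    }
}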

To demonstrate how easy it is to include the trained neural network in an application, I've prepared a simple Java application using WebSphere Studio Application Developer. Naturally, you can use whatever Java development environment you prefer. The application provides a panel in which a name can be entered; when you hit Enter/Return, the gender is returned. If you examine the code, you can see for yourself how straightforward this is. In a nutshell, the neural network objects are restored, the name entered is parsed to conform to the neural network data representation, the parsed name is passed to the neural network as input, and the gender is passed back as output. As I mentioned earlier, if you enter a subset or incomplete name (for example, Adelx instead of Adele), you'll see that the neural network still maintains reasonable accuracy.
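
In outline, the application's inner step looks something like the sketch below. The parsing mirrors the training representation exactly; the call that actually feeds the fields to the restored agent depends on the ABLE API, so it appears here only as a comment (class and method names are mine):

// Sketch: prepare a raw name for the network in the trained representation.
public class GenderLookup {
    // Parse a raw name into the 12 input fields the network was trained on:
    // single characters, padded with "x" out to 12 positions.
    static String[] parseName(String rawName) {
        String[] fields = new String[12];
        for (int i = 0; i < 12; i++) {
            fields[i] = i < rawName.length()
                    ? String.valueOf(rawName.charAt(i)) : "x";
        }
        return fields;
    }

    public static void main(String[] args) {
        String[] input = parseName("Adelx");  // incomplete name, as in the text
        System.out.println(String.join(" ", input));  // A d e l x x x x x x x x
        // Object agent = ...;  // restored from smbA.ser as shown earlier
        // The call that passes these fields to the agent and reads back the
        // gender depends on the ABLE API; consult the ABLE Javadoc for it.
    }
}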

As you can see, the Agent Building and Learning Environment is as powerful as it is easy to learn and use. It puts the power of decades of artificial intelligence research within easy reach of any Java developer. In future articles, other facets of the ABLE framework will be explored, such as its use to support autonomic computing applications.

References

  • Rumelhart, D.E., and McClelland, J.L. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. MIT Press/Bradford Books, 1986. A foundational text on neural networks.
  • IBM Systems Journal, Vol. 41, No. 3, 2002. Applications of Artificial Intelligence: www.research.ibm.com/journal
  • IBM Systems Journal, Vol. 42, No. 1, 2003. Autonomic Computing: www.research.ibm.com/journal

    About The Author
    Mike Fichtelman is a certified senior project manager at IBM supporting their Web hosting business. He has over 20 years' experience in the information technology field as a developer, designer, and project manager. Mike has an MBA from Hofstra University and his work has been published in a number of journals on subjects ranging from infrastructure to the development of wireless applications using Java, XML, and WAP. He also teaches an e-business course in the MBA program at the University of Phoenix. [email protected]

    Source code for this article: zip file (~271 KB)
