Maxwell's Stuttgart Neural Network Simulator [SNNS] Tutorial

Setting Up

To make life easy with this tutorial, begin by creating a working directory in your home directory. For example purposes, let's call it anns. Make anns your current directory.

Download the following network and pattern files into anns: the xor network file, and the xor, xor_bigtrain, and xor_bigtest pattern files.


Starting Up

Now that you have set up your working environment, go ahead and start up SNNS by typing

snns

When SNNS has started up, it will bring up a startup window and the SNNS manager window. Click anywhere on the startup window to make it go away.  The remaining manager window will look like the following.
Manager Window Image

From the manager window you can control which other windows are visible by clicking on the various buttons. The relevant windows for this tutorial are the File window, the Control window, the Info window, the Display window, the Graph window, and the Help window. Note: SNNS' help facility is not bad, and using the Help window will give you more information about a topic than what is covered in this tutorial.


Loading a Network

Click on the File button to bring up the file window.

File Window Picture

Notice that the Network button is highlighted, which means that only network files are displayed in the file list. The other file-type buttons likewise show only files of the corresponding type. The two file types of interest for this tutorial are Network files and Pattern files. Network files store network configurations and weight values; Pattern files store input/output patterns for training and testing purposes.

Go ahead and load the xor network by double-clicking on the name of the file and then clicking on the LOAD button. You should note that SNNS has printed out that the network was loaded from a file. The text appears in the console window in which you started SNNS. SNNS will use this window to keep you up to date on its activities.

Leave the File window open for the next task.

Before loading any more files, however, take a look at the network that you just loaded.  Do this by going back to the Manager window and clicking on the DISPLAY button. The Display window should pop up and look like the following:

Display Window Image

This is a representation of an Artificial Neural Network (arguably, not a very good one). Note a couple of things. First, each block is called a node. Each node represents one simple neuron in the network. The integer value above each node is the node's id number; the number below the node is the node's current value. The size of the square also reflects the node's current value: nodes with values close to 1 are drawn with large green squares, while nodes with values close to zero are drawn with small squares.

In this network, the two nodes at the bottom are input nodes, the node in the middle is the hidden node, and the node at the top is the output node. Therefore, the current inputs to the network are 1.0 and 1.0, and the current output of the network is 0.104, which is close to 0. Since this network has been trained to simulate the XOR function, an output close to 0 for the inputs (1, 1) is correct.
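
If it helps to see the arithmetic behind that output value, here is a minimal Python sketch of a forward pass through a network of this shape. It assumes the network also has direct input-to-output links (a single hidden node on its own cannot compute XOR), and the weights are hand-picked for illustration; they are not the weights stored in the loaded network.

# Illustrative forward pass for a 2-input / 1-hidden / 1-output network
# with assumed shortcut links from the inputs to the output node.
# The weights are hand-picked to compute XOR; the weights SNNS learned
# (visible in the Display window) will be different.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2):
    # the hidden node here acts roughly like AND(x1, x2)
    h = sigmoid(20.0 * x1 + 20.0 * x2 - 30.0)
    # the output node acts roughly like OR(x1, x2) AND NOT h, i.e. XOR
    return sigmoid(20.0 * x1 + 20.0 * x2 - 40.0 * h - 10.0)

for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, round(forward(*pattern), 3))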

What is missing from this visualization of the ANN are the links between the nodes and their weights. To show these parts of the network, we need to change the setup of the display. To do this, first click on the SETUP button to bring up the Setup window.

Display Setup Window Image

Note that the buttons across from the links field are not highlighted. The links: ON button tells SNNS to show the network links; the links: -2.35 button tells SNNS to show the connection weights; and the links: -> button tells SNNS to show the direction of the links in the network. Go ahead and click on the first two of these buttons to activate the links and weight displays. Then click on the DONE button to effect the changes. The network Display window should now look like this:

Display Window Image 2

Note the connection lines and the weights that indicate their strength.


Loading a Pattern File

Now you have finished loading a network and displaying it, but the network has no data to process. The next step in working with SNNS is to load a Pattern file, which contains input/output pairs. The first thing to do is to return to the File window and click on the Patterns button so that all of the Pattern files in the directory are displayed in the file list. The resulting picture should look like:
 

Double-click on the file xor and then click on the LOAD button to load it into active memory. In the SNNS text window, it should tell you that it just loaded the pattern file. Conveniently, SNNS is able to have more than one pattern file in active memory at once, so long as the memory is available (it will tell you if it's not). Go ahead and double-click on xor_bigtrain and load that pattern file (by clicking once on the LOAD button), and then double-click on xor_bigtest and load that pattern file.

Now you have three pattern files in active memory, and, as you will see, you can choose which pattern set to use for your training set and which to use for your testing set.

Since we are done loading things for now, go ahead and close the File window by clicking the DONE button.

NEVER CLOSE AN SNNS WINDOW BY USING THE XWINDOW MENUS OR DOUBLE-CLICKING ON THE UPPER LEFT SQUARE OF THE WINDOW. THIS WILL CLOSE THE WHOLE SIMULATOR.


Looking at Patterns

One of the important things to do when you are working with an ANN is to understand what it's doing with the inputs that you give it. In other words, you want to watch your network in action. Only then can you get a sense of what it's doing with the inputs.

In order to control what patterns the network sees and execute the backpropagation training algorithm you need to open the Control window by clicking on the CONTROL button in the Manager window. The control window will look like this:

Control Window Image

The editable fields on the left side of the window all deal with training parameters that you can set; we will cover these a bit later. The buttons control various aspects of training and testing patterns. In the middle of the window you will notice the name of the pattern file you loaded last. The upper name indicates the training set that the network would use if you told it to start training right now. The lower name indicates the test set that the network will use to see how well it is generalizing its ability to solve the problem.

Click on the upper USE button to select xor as the current training set, and make sure xor_bigtest is selected as the testing set (lower button). Your Control window should now look like the image above.

Now what we would like to do is see how the network performs on each of the four patterns in xor [(0 0), (0 1), (1 0) , (1 1)]. The TEST button in the middle of the second row allows us to step through the current training set, apply each pattern to the network, and calculate the network's output. Go ahead and click on the TEST button four times, watching as SNNS applies the four different patterns to the network. Note that with the two patterns (1 1) and (0 0) the network outputs a value close to zero, while for the other two patterns the network outputs a value close to 1. Also, note that the editable field next to the word PATTERN changes numbers as you click through the different input patterns. The value in the PATTERN field indicates which pattern the network is seeing.

Go ahead and change the training set (upper button) to xor_bigtrain and click through some of these patterns. Notice that the inputs are not clean 0 or 1 values, but in between. Ask yourself if the output of the network makes sense given the inputs. In general, if the two inputs are similar the output of the network should be close to zero, and if they differ it should be close to 1.
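
As a quick sanity check, the following tiny helper (hypothetical, not part of SNNS) computes the ideal XOR answer for a noisy pattern, assuming each input in xor_bigtrain is just a clean 0 or 1 with a little noise added; compare its answer with what the network outputs.

# Hypothetical helper: round each noisy input to the nearest clean bit,
# then XOR the bits to get the target the network should approximate.
def ideal_xor(x1, x2):
    return round(x1) ^ round(x2)

print(ideal_xor(0.93, 0.11))  # inputs differ   -> 1
print(ideal_xor(0.88, 0.97))  # inputs similar  -> 0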

Now you have successfully loaded a network, loaded several pattern sets, set the testing and training pattern sets, and stepped through a pattern set and tested the performance of the network on it.


Training a Network

The next step in the tutorial is to re-initialize the XOR network to random weight values and then train it on the XOR function. Note, you can do whatever you want to a network (reset the weights, add nodes, delete nodes) and these changes will only exist in active memory until you explicitly save the network.

The first step in training a network is to initialize the network connection weights to small random values. The INIT button on the top line of the Control window will do this for you. Go ahead and click on the INIT button. Notice that the numbers indicating the connection weights have changed. Now select xor as the training pattern set and use the TEST button to step through and see how the network does. Notice that the network no longer gives appropriate answers for the various input patterns.

Now, during the training process we are going to want to watch the performance of the network so that we know when to stop training (to avoid overtraining). Go back to the Manager window and click on the GRAPH button to bring up the Graph window.

Empty Graph Window

The buttons with the arrows next to the words Scale X and Scale Y control the horizontal and vertical scale of the graph (which you may need to change in order to get a good picture of what is going on). SNNS does not scale the graph automatically in the vertical direction.

Now go back to the Control window. The important buttons during training for a standard feed-forward backpropagation-trained network (which is what we are working with) are, for the most part, on the second line to the right of the word CYCLES. The SINGLE button will do backpropagation training using only the current pattern (whose id is given in the PATTERN field). The ALL button will do backpropagation training using all of the patterns in the current training set. The number of backpropagation passes SNNS will execute is determined by the editable CYCLES field. The order in which the patterns will be shown to the network is determined by the SHUFFLE button. If the SHUFFLE button is not highlighted, then the patterns will be shown to the network in the order in which they appear in the pattern file. If, however, the SHUFFLE button is highlighted, then SNNS will randomly shuffle the patterns before each cycle of backpropagation training. Using the shuffle option is important because otherwise the network can oscillate between certain states as the patterns are shown in the same order for each training epoch.
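
To make the CYCLES and SHUFFLE semantics concrete, here is a rough Python sketch of the control flow that the ALL button carries out. It is not SNNS code; the backpropagation step itself is replaced by a stand-in update function that you would supply.

import random

def train_all(update_fn, training_set, cycles, shuffle=True):
    # Run `cycles` epochs; each epoch presents every pattern exactly once.
    for epoch in range(cycles):
        order = list(training_set)
        if shuffle:
            random.shuffle(order)        # new random order every epoch
        for inputs, target in order:
            update_fn(inputs, target)    # one backpropagation step per pattern

# Toy usage with a stand-in update function that just counts presentations.
presented = []
train_all(lambda x, t: presented.append((x, t)),
          [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)],
          cycles=3)
print(len(presented))   # 3 epochs x 4 patterns = 12 presentations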

Since only one epoch of training (seeing all of the patterns just once) doesn't do much for a network, it's worth setting the CYCLES field to 50 or 100 for starters on this problem. Go ahead and do that now. Then click on the SHUFFLE button so that the patterns will be presented in a random order.  Then, set the training set (upper USE button) to xor_bigtrain, and the test set (lower USE button) to xor_bigtest.

After doing these actions, your control window should look like:

Control Window Setup for Training
 

Now go ahead and click on the ALL button to execute 50 or 100 epochs of backpropagation training. Note that nothing appeared in the graph display window. It turns out that you need to change the Y scale by clicking a few times on the right arrow next to the Scale Y text in the Graph window. You should see something like:

Graph Window Showing SSE

What this shows is the sum-squared error of the training set (compare what the network is producing with what the target value is, square it, and sum this up over all of the training patterns). Now go ahead and click on the ALL button a few more times and watch what happens. At somewhere around 1200 epochs the curve should drop and flatten out. An example of what you might see is the following:
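
In outline, the quantity being plotted can be computed as follows; the outputs and targets below are made up for illustration, and the mean squared error shown here simply divides the SSE by the number of patterns.

# Sketch of the error measures: SSE sums the squared differences between
# network outputs and targets over the whole pattern set.
def sse(outputs, targets):
    return sum((o - t) ** 2 for o, t in zip(outputs, targets))

outputs = [0.08, 0.91, 0.89, 0.12]   # hypothetical network outputs
targets = [0.0, 1.0, 1.0, 0.0]       # XOR targets for (0,0), (0,1), (1,0), (1,1)
error = sse(outputs, targets)
print(error)                  # sum-squared error over the pattern set
print(error / len(targets))   # mean squared error per pattern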

Graph Window with full SSE plot

It is important to realize that your individual case may look different. As the INIT function sets the connection weights to random values, no two training runs will look exactly the same. You should also notice that SNNS prints out quantitative information in the window in which you started it. From this quantitative information you can tell exactly how your network is doing.

At this point, let's try to get a better understanding of how well the network has learned the function. From the graph above we can see that the sum-squared error has decreased from about 3 to just over 0.1. Has the network really learned the XOR function and improved its performance?

If we look at the MSE column in the console window (mean squared error), it has gotten pretty low. In the example shown above, it reached about 0.003, which means an error of less than 1% on the training set values.

Now it's time to see how well the network does on the simple pattern set of (0, 0), (1, 1), (0, 1), and (1, 0). Go ahead and go back to the control window, make xor the current training pattern (top USE button), and then click through the patterns using the TEST button. The network should do reasonably well. The output values for (0, 0) and (1, 1) should be close to zero, while the values for (0, 1) and (1, 0) should be close to 1.


Validation and Overtraining

In the previous training example, during training we only looked at the performance of the ANN on the training set. We then looked at the performance of the network on a small test set (xor) by hand. When we are training an ANN, however, we need to know when to stop training. One of the indicators we use is the SSE curve on the training set, which is what we looked at in the previous section. However, ANNs can be overtrained and start to lose their ability to generalize the function to inputs they have not seen during training. To overcome this problem, we need to periodically test the ANN on a test pattern set. This test pattern set should not be the pattern set being used to train the network, since its purpose is to test the network on inputs it has not seen during training.

SNNS allows us to do this sort of validation testing. The lower USE button in the Control window specifies which pattern set to use as the validation set. Go ahead and set the validation pattern set to xor_bigtest, and reset the training pattern set (upper USE button) to xor_bigtrain. What you may notice is that the Control window now looks exactly the same as it did when we trained in the previous section. It turns out that in order to activate validation testing you have to tell SNNS how often to validate the network. The editable field next to VALID in the Control window currently contains a 0. Change this value to a 10. What this means is that every 10 epochs, SNNS will show all of the validation patterns to the network and calculate the SSE for the pattern set. It will not train the network using the validation set; it will only do the forward pass.
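
A sketch of this validation schedule might look like the following; train_epoch and validate_sse are stand-ins for whatever training and forward-pass code is involved (they are not SNNS calls).

def run(train_epoch, validate_sse, cycles, valid):
    # Train for `cycles` epochs; every `valid` epochs record the SSE of a
    # forward pass over the validation set. The validation pass never
    # changes any weights.
    history = []
    for epoch in range(1, cycles + 1):
        train_epoch()
        if valid > 0 and epoch % valid == 0:
            history.append((epoch, validate_sse()))
    return history

# Toy usage: pretend the validation SSE shrinks as training proceeds.
state = {"sse": 3.0}
def fake_train(): state["sse"] *= 0.9
def fake_validate(): return round(state["sse"], 3)
print(run(fake_train, fake_validate, cycles=100, valid=10))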

Now reset the network to random connection weights using the INIT button, and change the CYCLES field to 100 so you don't have to click on the ALL button quite as many times. Your control window should now look like:

Control window for validation run

Before starting to train again, go to your Graph window and clear the display by clicking on the CLEAR button. Now begin training again by clicking on the ALL button to execute backpropagation training using the current training set and validating using the validation set.

This time the SSE graph contains two lines: the black line is the training set, the red line the validation set. With these training sets, it turns out that the network does not display overtraining, which would be indicated by the SSE on the validation set starting to get worse. An example of what you might see in your graph is the following:

Graph window showing training & validation

Now let's try using a different training set. Go ahead and set the training set to xor, and leave xor_bigtest as the validation set. Notice that, unlike xor_bigtrain (which contains 50 sample points of the XOR function), xor contains only four sample points. Thus, it does not provide as good a model of the function. Let's see how this affects the training.

Reset the network to random initial weights using the INIT button in the control window. Then, making sure that xor is your training set (upper) and xor_bigtest is your validation set (lower), clear your graph and start training (using the ALL button).

You may get a result that looks something like the following:

Example of Overtraining

Now, consider what is happening. The training set (black line) is getting better and better. In fact, the SSE for the training set is approaching zero, which means that it is giving pretty much the correct answer for each pattern. However, look at the validation set (red line). When the training set starts to get better, the validation set also starts to get better. At some point, however, the network starts to memorize the training set and its performance on the validation set starts to get worse again. Furthermore, even at its lowest point, the SSE for the validation set is much worse than its best value using xor_bigtrain. What this example really shows is that the size and quality of your training set can strongly affect the performance of the network, even on simple problems.


Creating a Network from Scratch

The next step in the tutorial is to create a network from scratch. For this example, we'll create a new network for the XOR problem. This network will have two input nodes, two hidden nodes, and one output node. The easiest way to create simple feed-forward networks is to use the bignet tool. Click the BIGNET button on the manager panel and select "general" from the list of options. You should get a window that looks something like the following.

The top section is where you define the nodes of the network, the bottom is where you define the connections. With this tool you create one layer of the network at a time, and you can specify 1D or 2D arrays of nodes (2D arrays are useful for image processing).

Start by creating the input layer. Click on the TYPE button until the top field in the right column says "input" (it should be by default). Then put a 2 in the next field, and a 1 in the next field. This creates a 2x1 input layer, which is what we want for the XOR function. The z-coordinate field can be blank. Finally, click on the POS button to set the relative position for this layer to be "left". When you have filled in the fields correctly, click on ENTER to move the information to the left-hand column, thus creating the current plane.

To create the second (hidden) layer, start by clicking on the TYPE button to set the upper right field to "hidden". Then enter 2 for x and 1 for y. Set the relative position to "below" by clicking on the POS button. Finally, click ENTER to create the second layer.

For the final (output) layer, set the type to "output", make the layer 1x1 (only 1 output node), and leave the relative position at "below". Click ENTER to build the output layer.

For a simple feed-forward network, such as this one, the easiest way to build the connections is to click on the FULL CONNECTION button at the bottom of the window. Finally, click CREATE NET to build the network. SNNS can only work with one network at a time, so it will warn you about destroying the current network and make sure that is what you want to do.

At this point you should see a network that looks like the following:

Saving the Network

Just like you used the File window to load networks and pattern files, you can use the same window to save your network in its current state. Just move to the directory in which you want to save the network, type a file name, and click on SAVE. You do not need to add the .net suffix, as SNNS will do it for you.

Now that you have created your own network, go back to the control panel, hit the INIT button and try training it on the XOR problem.


Pattern Files

You may be wondering how to create your own pattern files. Well, it turns out that using SNNS to create pattern files is a slow and painstaking process.  It's far better to write your own program to do it for you; then all you need to know is the format in which SNNS expects to see the files.

Basically, SNNS pattern files have the form

{HEADER}
{DATA}

Where the header looks something like the following:

SNNS pattern definition file V3.2
generated at Wed Oct 30 12:38:53 1996
No. of patterns : 4
No. of input units : 2
No. of output units : 1

The header needs to tell SNNS it's a pattern file, and then tell SNNS how many patterns, how many inputs, and how many outputs.

The data portion of the file looks something like:

1 0 1
0 1 1
0 0 0
1 1 0

Where the first two values on each line are the two input values and the last value is the single output value. If, for example, you had a network with 9 inputs and two outputs, then you would need 11 values on each line (the 9 inputs, followed by the 2 outputs). When you are creating patterns, remember that the output values of the default nodes in SNNS (which use the sigmoid function) range over [0..1]. In other words, don't ask the network to generate values outside of those boundaries. Likewise, it is smart to scale your inputs to the range [0..1] as well if you are using the standard sigmoid function.
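
Since writing these files by hand is tedious, a small script along the following lines can generate one in the format shown above. The XOR patterns and the file name myxor.pat are just placeholders; double-check that SNNS accepts the header exactly as your version expects it.

# Writes an SNNS pattern file following the header and data layout shown above.
from datetime import datetime

def write_pattern_file(filename, patterns):
    # patterns is a list of (inputs, outputs) tuples, all with the same lengths
    n_in = len(patterns[0][0])
    n_out = len(patterns[0][1])
    with open(filename, "w") as f:
        f.write("SNNS pattern definition file V3.2\n")
        f.write("generated at %s\n" % datetime.now().ctime())
        f.write("No. of patterns : %d\n" % len(patterns))
        f.write("No. of input units : %d\n" % n_in)
        f.write("No. of output units : %d\n\n" % n_out)
        for inputs, outputs in patterns:
            f.write(" ".join(str(v) for v in list(inputs) + list(outputs)) + "\n")

write_pattern_file("myxor.pat", [((1, 0), (1,)),
                                 ((0, 1), (1,)),
                                 ((0, 0), (0,)),
                                 ((1, 1), (0,))])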


Summary

You should now be able to create simple networks, create pattern files, load networks and pattern files, train a feed-forward backpropagation network, and save the results. In other words, you are ready to start tackling some real problems.

In order to be successful using ANNs, you need to have an attitude of experimentation. SNNS gives you the ability to quickly change network architecture, train using different pattern sets, modify the training parameters, and reinitialize and retrain the network. You should take advantage of these features to try and maximize the performance of your network on the problem at hand.